We held our eleventh tinyML Talks webcast with two presentations on July 21, 2020 at 8:00 AM and 8:30 AM Pacific Time: Tomer Malach from DSP Group presented "AI/ML SoC for Ultra-Low-Power Mobile and IoT Devices," and Aravind Natarajan from Qualcomm Technologies presented "Pushing the Limits of Ultra-Low-Power Computer Vision for tinyML Applications."
Tomer Malach (left) and Aravind Natarajan (right)
TinyML deep neural networks (DNNs) are at the core of artificial intelligence (AI) research, with proven inference applications in the areas of voice, audio, proximity, gesture, and imaging, to mention just a few. With so many applications, DNN AI models are evolving rapidly and becoming more complex. This complexity shows up mainly as an increasing number of computations per second and a larger required memory capacity, both of which conflict with tinyML's low-power requirement. The challenge is exacerbated by the need to run DNN AI models in real time on tinyML hardware, which demands higher memory-core bandwidth and multiply-accumulate (MAC) capability, again pressuring designers toward higher power consumption, silicon area, and memory size despite the tight constraints on these parameters. To realize the true potential of tinyML, what's needed is a way to address all these design issues simultaneously. This tinyML talk will present how AI models, run on a best-fit hardware and software (HW/SW) architecture and combined with compression and optimization methods, can relieve many AI core bottlenecks and expand a device's capabilities. The AI compression approach helps scale megabyte (MB) models down to kilobyte (KB) models, increases memory utilization efficiency for reduced power consumption (and potentially smaller memory size), and enables very complex models to run on a very small AI system-on-chip (SoC), resulting in an ultra-low-power AI device consuming only microwatts (µW) of power.
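To make the megabyte-to-kilobyte claim concrete, here is a minimal sketch (not from the talk) of one common compression step, symmetric post-training 8-bit quantization, which alone cuts weight storage roughly 4x by replacing 32-bit floats with 8-bit integers plus a single scale factor:

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor quantization of float32 weights to int8.

    Returns int8 weights plus the scale needed to dequantize,
    shrinking storage ~4x (32-bit -> 8-bit)."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 weights from int8 values."""
    return q.astype(np.float32) * scale

# Example: a 1 MiB float32 layer becomes a 256 KiB int8 layer.
w = np.random.randn(512, 512).astype(np.float32)
q, scale = quantize_int8(w)
print(w.nbytes, "->", q.nbytes)  # 1048576 -> 262144 bytes
```

Production tinyML pipelines typically combine quantization with pruning, weight sharing, and entropy coding to push well beyond 4x, at the cost of some accuracy that is usually recovered by fine-tuning.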
Tomer Malach is an AI Hardware architect at DSP Group and an M.Sc student at Ben-Gurion University of the Negev, Israel. Tomer received his B.Sc. degree (with honors) in electrical and computer engineering from Ben-Gurion University.
His current research focuses on hardware-based compression of machine learning models for edge devices.
Achieving always-on computer vision in a battery-constrained device for tinyML applications is a challenging feat. Meeting the requirements of computer vision at <1 mW demands innovation and end-to-end optimization across the sensor, custom ASIC components, architecture, algorithm, software, and custom trainable models. Qualcomm Technologies developed an always-on computer vision module that comprises a low-power monochrome QVGA CMOS image sensor and an ultra-low-power custom SoC with dedicated hardware for computer vision algorithms. By challenging long-held assumptions in traditional computer vision, we are enabling new applications in mobile phones, wearables, and IoT. We also introduce always-on computer vision system training tools, which facilitate rapid training, tuning, and deployment of custom object detection models. This talk presents the Qualcomm QCC112 chip, use cases enabled by this device, and an overview of the training tools.
Aravind Natarajan is a Staff Engineer at Qualcomm AI Research, working on ultra-low power computer vision applications. Prior to joining Qualcomm, Aravind received his PhD in Computer Engineering from the University of Texas at Dallas, working on concurrent data structures. He is the author of multiple research papers and has been granted 6 patents.
Feel free to ask your questions on this thread and keep the conversation going!