We held our next tinyML Talks webcast: Ravishankar Sivalingam from Qualcomm AI Research presented "tinyML: Enabling Ultra-low Power Always-On Computer Vision at Qualcomm" on October 7, 2021.
Achieving always-on computer vision in a battery-constrained device for TinyML applications is a challenging feat. To meet the requirements of computer vision at <1 mW, innovation and end-to-end optimization are necessary across the sensor, custom ASIC components, architecture, algorithms, software, and custom trainable models. Qualcomm Technologies developed an always-on computer vision module that comprises a low-power monochrome QVGA CMOS image sensor and an ultra-low-power custom SoC with dedicated hardware for computer vision algorithms. By challenging long-held assumptions in traditional computer vision, we are enabling new applications in mobile phones, wearables, and IoT. We also introduce always-on computer vision system training tools, which facilitate rapid training, tuning, and deployment of custom object detection models. This talk presents the Qualcomm QCC112 chip, use cases enabled by this device, and an overview of the training tools.
Ravishankar Sivalingam obtained his M.S. in Computer Science, M.S. in Electrical Engineering, and Ph.D. in Electrical Engineering from the University of Minnesota, Twin Cities, specializing in sparse modeling for computer vision. He was a founding member of the Computational Intelligence team at 3M Corporate R&D, Minnesota, where he applied machine learning and computer vision to applications across 3M's diverse businesses, ranging from biometrics to dental and orthodontic products. He also worked at June Life, a startup bringing computer vision technology to the smart kitchen of the future. Currently, Ravi develops ML algorithms for ultra-low power computer vision at Qualcomm AI Research in Santa Clara, CA.
Watch on YouTube:
Download presentation slides:
Feel free to ask your questions on this thread and keep the conversation going!