Adaptive AI for a Smarter Edge - Sek Chai - April 14, 2020

We held our second tinyML Talks webcast! Sek Chai from Latent AI presented Adaptive AI for a Smarter Edge on April 14, 2020 at 8 AM Pacific Time.


Low power and low memory requirements are fundamental challenges for TinyML applications. Sek Chai from Latent AI will discuss recent progress on quantization approaches that enable smart edge devices with minimal loss in accuracy. Sek will also provide an overview of customer-driven use cases for TinyML.

Sek Chai is the CTO and co-founder at Latent AI. In previous roles, Sek was the principal investigator for multiple DARPA/DoD projects at SRI International, and also held senior technical positions at Motorola Labs. He received his Ph.D. from Georgia Tech. Sek has spent most of his career focused on developing and evangelizing efficient computing for embedded vision.

Watch on YouTube
Download presentation slides

Feel free to ask your questions on this thread and keep the conversation going!


We had a lot of interesting questions today, and we covered many of them live.
I’ll add my thoughts here, but Sek Chai and his colleagues from Latent AI are welcome to clarify further.

There were several questions about throttling of network utilization in Latent AI’s LEIP Adapt. From my understanding, and a quick skim of the paper Toward Runtime-Throttleable Neural Networks, a smaller gating network is used to determine which parts of a larger network stay active. The utilization reduction can be obtained by reducing the number of neurons in each layer and/or by skipping layers entirely, as in the sketch below.
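To make that mechanism concrete, here is a minimal PyTorch-style sketch of channel-group gating, where a small controller maps a utilization target to gates that switch whole channel groups of a larger block on or off. This is my own illustration of the general idea from the paper, not Latent AI’s actual LEIP Adapt implementation; the module names, the single-linear-layer controller, and the hard 0.5 threshold are all assumptions.

```python
import torch
import torch.nn as nn

class ThrottleableBlock(nn.Module):
    """Conv block whose output channels are split into groups that can be gated off."""
    def __init__(self, in_ch, out_ch, num_groups=4):
        super().__init__()
        assert out_ch % num_groups == 0
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.group_size = out_ch // num_groups

    def forward(self, x, gate):
        # gate: (num_groups,) vector of 0/1 values; a 0 disables a whole
        # channel group, reducing effective width and therefore compute
        y = torch.relu(self.conv(x))
        mask = gate.repeat_interleave(self.group_size).view(1, -1, 1, 1)
        return y * mask

class Controller(nn.Module):
    """Small gating network: maps a utilization target in [0, 1] to per-group gates."""
    def __init__(self, num_groups=4):
        super().__init__()
        self.fc = nn.Linear(1, num_groups)

    def forward(self, utilization):
        logits = self.fc(utilization.view(1, 1))
        # Hard threshold at inference; training would use a relaxation instead
        return (torch.sigmoid(logits) > 0.5).float().view(-1)

# Example: run one block at a requested utilization level
block, ctrl = ThrottleableBlock(3, 32), Controller()
x = torch.randn(1, 3, 32, 32)
gate = ctrl(torch.tensor(0.5))
y = block(x, gate)  # channels in gated-off groups are zeroed out
```

Note that the hard threshold is not differentiable, so training such gates typically relies on a continuous relaxation or a straight-through estimator, which relates to the training-difficulty question below.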

Sek - please add any other references or public material about this, as many attendees were interested in learning more.

Sek - could you please also comment on the level of difficulty of training such adaptive networks compared to typical DNN architectures?

There were several questions about Latent AI’s business model/cost, platforms supported, and other features. Please contact info@latentai.com for further information.

A few questions were about power-of-two quantization. In Sek’s “Quantization Approaches” slide, the quantization levels for the logarithmic (power-of-two) approach are all 2^n, where n = -m, …, m (m is determined by the largest absolute value of the input signal range). This makes multiplication much simpler, since multiplying 2^a by 2^b is just 2^(a+b), which can be implemented entirely with shift operations. A key paper is Convolutional Neural Networks using Logarithmic Data Representation. A more recent paper, Additive Powers-of-Two Quantization, extends this idea.
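As a concrete illustration, here is a short NumPy sketch of the scheme: each value is rounded to the nearest power of two in log space, after which products reduce to exponent additions, i.e., integer shifts. This is my own sketch, not code from the talk; the function name and the exponent range are arbitrary choices.

```python
import numpy as np

def quantize_pow2(x, n_min=-8, n_max=0):
    """Round each nonzero value of x to a signed power of two: sign(x) * 2^n,
    with n in [n_min, n_max]. Magnitudes below the smallest level flush to zero.
    The exponent range here is illustrative only."""
    x = np.asarray(x, dtype=np.float64)
    sign = np.sign(x)
    mag = np.abs(x)
    # Round in log2 space, clipping exponents to the representable range
    safe = np.maximum(mag, 2.0 ** (n_min - 1))        # avoid log2(0)
    n = np.clip(np.round(np.log2(safe)), n_min, n_max).astype(int)
    q = sign * 2.0 ** n
    q[mag < 2.0 ** (n_min - 1)] = 0.0                 # flush tiny values to zero
    return q, n

# Multiplying two quantized values just adds exponents:
# 2^a * 2^b = 2^(a+b), an integer shift instead of a multiply.
a, b = 3, 4
assert (1 << (a + b)) == (2 ** a) * (2 ** b) == 128
```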


One last topic was regarding tutorials and frameworks for tinyML. Pete Warden’s talk video and slides from March 31st on getting started with tinyML are available via the links in this forum thread.
Also please check out tinyML’s YouTube channel and the tinyML Summit material from 2020 and 2019, accessible at https://www.tinyml.org/summit/.


@ravi Thanks for the follow up on the questions and moderating the presentation today. It was a great webcast!


Please fill out a tinyML survey
https://bit.ly/3cgjTws

We are conducting a short survey to better understand how engineers and developers make design choices for their tinyML systems. We will share the survey results with those who participate.

@ravi (regarding LEIP Adapt): Here’s a blog article that Latent AI recently published with information about the Adaptive AI approach. It also includes more details about the prototype application (video-based gesture recognition) and references for additional reading.