We held our second tinyML Talks webcast! Sek Chai from Latent AI presented "Adaptive AI for a Smarter Edge" on April 14, 2020 at 8 AM Pacific Time.
Low power and low memory requirements are fundamental challenges for TinyML applications. Sek Chai from Latent AI will discuss recent progress on quantization approaches that enable smart edge devices with minimal loss in accuracy. Sek will also provide an overview of customer-driven use cases for TinyML.
Sek Chai is the CTO and co-founder at Latent AI. In previous roles, Sek was the principal investigator for multiple DARPA/DoD projects at SRI International, and also held senior technical positions at Motorola Labs. He received his Ph.D. from Georgia Tech. Sek has spent most of his career focused on developing and evangelizing efficient computing for embedded vision.
We had a lot of interesting questions today, and we covered many of them live.
I’ll add my thoughts here, but Sek Chai and his colleagues from Latent AI are welcome to clarify further.
There were several questions about throttling of network utilization in Latent AI’s LEIP Adapt. From my understanding, and a quick skim of the paper "Toward Runtime-Throttleable Neural Networks", a smaller gating network is used to determine which parts of a larger network stay active. The reduction in utilization can be obtained by reducing the number of active neurons in each layer and/or by skipping layers entirely (see the sketch below).
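To make that idea concrete, here is a minimal PyTorch sketch of block-level throttling. This is my own illustration, not Latent AI’s implementation: in the paper the keep/drop decisions come from a small learned gating network and the model is trained across utilization levels so accuracy degrades gracefully, whereas the fixed "keep the first k blocks" order and all class names below are simplifications I made up.

```python
import torch
import torch.nn as nn

class ThrottleableBlockLayer(nn.Module):
    """Linear layer whose output neurons are grouped into blocks that a
    runtime utilization knob u can switch off (hypothetical names)."""

    def __init__(self, in_features, out_features, num_blocks=4):
        super().__init__()
        assert out_features % num_blocks == 0
        self.linear = nn.Linear(in_features, out_features)
        self.num_blocks = num_blocks
        self.block_size = out_features // num_blocks

    def forward(self, x, u):
        h = torch.relu(self.linear(x))
        k = max(1, round(u * self.num_blocks))  # number of blocks to keep
        mask = h.new_zeros(self.num_blocks)
        mask[:k] = 1.0                          # simplification: keep first k
        mask = mask.repeat_interleave(self.block_size)
        return h * mask                         # zero out throttled blocks

class ThrottleableNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1 = ThrottleableBlockLayer(16, 64)
        self.layer2 = ThrottleableBlockLayer(64, 64)
        self.head = nn.Linear(64, 10)

    def forward(self, x, u=1.0):
        return self.head(self.layer2(self.layer1(x, u), u))

net = ThrottleableNet()
x = torch.randn(8, 16)
y_full = net(x, u=1.0)  # full-width network
y_half = net(x, u=0.5)  # roughly half the hidden blocks contribute
```

Note that this sketch still computes the masked outputs; a real implementation would skip the corresponding rows of the weight matrix, which is why gating whole blocks (rather than scattered individual neurons) makes the savings realizable on hardware.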
Sek - please add any other references or public material about this, as many attendees were interested in learning more.
Sek - could you please also comment on the level of difficulty of training such adaptive networks compared to typical DNN architectures?
There were several questions about Latent AI’s business model/cost, supported platforms, and other features. Please contact email@example.com for further information.
A few questions were about power-of-two quantization. In Sek’s “Quantization Approaches” slide, the quantization levels for the logarithmic (power-of-two) case are all of the form 2^n, where n = -m, …, m (m is determined by the largest absolute value of the input signal range). This makes multiplication much simpler, since multiplying 2^a by 2^b is just 2^(a+b), which can be implemented entirely with shift operations. A key paper is "Convolutional Neural Networks using Logarithmic Data Representation". A more recent paper, "Additive Powers-of-Two Quantization", extends this idea.
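To make the shift trick concrete, here is a small NumPy sketch (my own illustration, not from the talk). Weights are rounded to the nearest power of two in the log domain, so multiplying an integer activation by a quantized weight reduces to a bit shift; the exponent range m and the function name are made up for the example.

```python
import numpy as np

def quantize_pow2(x, m):
    """Round each value of x to the nearest signed power of two 2^n,
    with n in [-m, m] (hypothetical helper; rounds in the log domain)."""
    sign = np.where(x < 0.0, -1.0, 1.0)
    mag = np.maximum(np.abs(x), 2.0 ** (-m))  # guard against log2(0)
    n = np.clip(np.round(np.log2(mag)), -m, m).astype(int)
    return sign * 2.0 ** n, n

w = np.array([0.3, -0.09, 1.7])
q, n = quantize_pow2(w, m=4)
print(q)  # [ 0.25 -0.125  2.  ]  -> exponents n = [-2, -3, 1]

# With power-of-two weights, a multiply becomes an exponent add,
# i.e. a shift: 12 * 2^2 is a left shift by 2.
a = 12
print(a << 2)  # 48 (negative exponents become right shifts in fixed point)
```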
@ravi (regarding LEIP Adapt): Here’s a blog article that Latent AI recently published with information about the Adaptive AI approach. It also includes more details about the prototype application (video-based gesture recognition) and references for additional reading.