On August 18, 2020, at 8:00 AM and 8:30 AM Pacific Time, we held our tinyML Talks webcast with two presentations: Mark Stubbs from Shoreline IoT presented "Practical application of tinyML in battery-powered anomaly sensors for predictive maintenance of industrial assets," and Urmish Thakker from Arm presented "Pushing the limits of RNN Compression using Kronecker Products."
Mark Stubbs (left) and Urmish Thakker (right)
Detecting anomalies in industrial equipment provides significant savings by catching unnoticed trends toward complete failure before they cause unplanned downtime and costly repairs. Problems are detected and corrected early to attain the maximum useful life from an asset. The combination of tinyML, low-power wireless, integrated sensors, and IoT cloud enables a low-cost, easy-to-install system for monitoring industrial assets distributed throughout a factory. In this talk, we will present how tinyML is used for anomaly detection, alongside other sensor techniques, to create a long-life, battery-optimized solution for condition-based maintenance in industry. We will also show a live demonstration of a tinyML-based end-to-end system solution.
Mark Stubbs is CTO and Co-Founder at Shoreline IoT, leading a team to revolutionize data gathering and predictive maintenance in industrial applications. He formerly worked in IoT at Google and Echelon, building networked systems that gather sensor information and make it actionable. Mark also worked as a Systems Engineer at Apple.
This talk gives an overview of our work exploring Kronecker Products (KP) to compress sequence-based neural networks. The talk is divided into two parts. In the first part, we show that KP can compress IoT RNN applications by factors of 15-38x, achieving better results than traditional compression methods. This part includes a quick tutorial on KP and the best methodology for using KP to compress IoT workloads. However, when KP is applied to large Natural Language Processing tasks, it leads to significant accuracy loss (approximately 26%). The second part of the talk addresses this issue. We show a way to recover the accuracy otherwise lost when applying KP compression to large NLP tasks, using a novel technique that we call doping. Doping is the process of adding an extremely sparse overlay matrix on top of the pre-defined KP structure. We call the resulting compression method Doped Kronecker Product (DKP). We present experimental results demonstrating that DKP compresses a large language model with 25 MB LSTM layers by 25x with only a 1.4% loss in perplexity score, outperforming other traditional compression techniques.
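To make the idea concrete, here is a minimal NumPy sketch of KP compression with a doped sparse overlay. This is an illustration of the concept, not the authors' implementation: the matrix sizes, the 1% overlay density, and the magnitude-based selection of overlay entries are all assumptions chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Original dense weight matrix: 64 x 64 = 4096 parameters.
W = rng.standard_normal((64, 64))

# Kronecker-product structure: an (8 x 8) "A" and (8 x 8) "B" factor
# reconstruct a 64 x 64 matrix from only 64 + 64 = 128 parameters.
# (In practice A and B are learned; random factors here just show the shapes.)
A = rng.standard_normal((8, 8))
B = rng.standard_normal((8, 8))
W_kp = np.kron(A, B)

# Doping: overlay an extremely sparse correction matrix S on top of the
# KP structure, here keeping only the 1% largest-magnitude residual entries.
residual = W - W_kp
k = int(0.01 * W.size)  # number of nonzeros allowed in the overlay
threshold = np.sort(np.abs(residual), axis=None)[-k]
S = np.where(np.abs(residual) >= threshold, residual, 0.0)

# Doped Kronecker Product approximation: W ≈ (A ⊗ B) + S.
W_dkp = W_kp + S

print("KP-only error:", np.linalg.norm(W - W_kp))
print("DKP error:    ", np.linalg.norm(W - W_dkp))
print("overlay nnz:  ", np.count_nonzero(S))
```

Because S targets the worst residual entries, the DKP reconstruction error is strictly lower than KP alone while adding only a handful of extra parameters, which mirrors how doping recovers accuracy on large NLP tasks.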
Urmish Thakker is a Senior Research Engineer at Arm's ML Research Lab. His research focuses on the efficient execution of neural networks on Arm devices; specifically, he works on model quantization, pruning, structured matrices, and low-rank decomposition. His work at Arm has led to multiple patents and publications. Before joining Arm, he worked at AMD, Texas Instruments, and Broadcom as a performance modeling, design, and verification engineer, contributing to the development of multiple products. Urmish holds a Master's degree in Computer Science from the University of Wisconsin-Madison in the USA and a Bachelor's degree in Electrical Engineering from the Birla Institute of Technology and Science in India.
==========================
Watch on YouTube:
Mark Stubbs
Urmish Thakker
Download presentation slides:
Mark Stubbs
Urmish Thakker
Feel free to ask your questions on this thread and keep the conversation going!