tinyML Talks on August 29, 2022 “ScaleDown Study Group: Optimisation Techniques: Knowledge Distillation” by Soham Chatterjee from Sleek Tech

We held our next tinyML Talks webcast. Soham Chatterjee from Sleek Tech presented “ScaleDown Study Group: Optimisation Techniques: Knowledge Distillation” on August 29, 2022.


Knowledge Distillation is a process of compressing information from a larger model (the teacher) into a smaller model (the student). The student model is trained using the predictions of the teacher model. This way the student can even be trained on unlabelled data, by using the teacher model to generate the labels!
Join us on 29 August at 8:30 pm SGT to learn about Knowledge Distillation and try your hand at testing KD at the edge.

In this session, we will cover:

  1. Introduction to Knowledge Distillation
  2. Implementation of KD using TensorFlow and PyTorch
  3. Using ScaleDown for the KD optimisation technique
  4. Testing KD on an embedded device
  5. Resources and Research Papers
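
For anyone who wants to experiment before the session, here is a minimal sketch of the distillation setup described above, assuming PyTorch (the session also covers TensorFlow). The teacher/student networks, temperature, and data below are hypothetical placeholders, not the code from the talk:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """KL divergence between softened teacher and student distributions.

    The teacher's soft predictions act as the training targets, so no
    ground-truth labels are needed for this term.
    """
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_student, soft_targets, reduction="batchmean") * temperature ** 2

# Placeholder teacher/student models and unlabelled inputs for illustration only.
teacher = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 10)).eval()
student = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 10))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

unlabelled_batch = torch.randn(64, 32)
with torch.no_grad():                      # the teacher only generates targets
    teacher_logits = teacher(unlabelled_batch)

loss = distillation_loss(student(unlabelled_batch), teacher_logits)
loss.backward()
optimizer.step()
```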

Soham is a machine learning engineer at Sleek Tech, Singapore. Previously, he was a research master’s student at NTU, where he worked on combining edge computing techniques with neuromorphic hardware to build optimized microcontrollers. He is also the instructor for the Udacity Nanodegree “Intel Edge AI for IoT Developers”, where he taught how to optimize models for edge computing applications. Soham’s passion for TinyML and MLOps led him to combine the two and build tools and techniques for efficiently and easily deploying TinyML models, including ScaleDown, where he is a core developer. Apart from this, Soham is also the instructor for Udacity’s “Machine Learning Engineer with Microsoft Azure” and “AWS Machine Learning” Nanodegrees.

=========================
Watch on YouTube:
Soham Chatterjee

Download presentation slides:
Soham Chatterjee

Feel free to ask your questions on this thread and keep the conversation going!