We held our next tinyML Talks webcast. Marcelo Rovai from the TinyML4D group presented "Unleashing the Power of the New XIAO ESP32S3 Sense: Tackling Anomaly Detection, Image Classification, and Keyword Spotting with TinyML" on June 13, 2023.
As machine learning continues to evolve and integrate with embedded systems, TinyML emerges at their intersection, offering the potential to run machine learning models on low-power microcontrollers. This talk will delve into the fascinating world of Tiny Machine Learning (TinyML) using a thumb-size ESP32 camera development board, the Seeed Studio XIAO ESP32S3 Sense. We will explore three projects demonstrating the wide range of possibilities with TinyML.

Our journey begins with an Anomaly Detection and Motion Classification project. Here, we will use an Inertial Measurement Unit (IMU) sensor to identify unusual patterns and classify various types of motion. We will discuss collecting and preprocessing sensor data, training a machine learning model, and deploying it onto the ESP32S3 for real-time inference.

Next, we will explore Image Classification, showing how the XIAO ESP32S3 Sense, with its built-in camera, can identify and classify objects. We will discuss the challenges of working with image data, including handling high dimensionality and data variability, and demonstrate how we overcame these challenges to build a robust classifier.

Finally, we will conclude the talk with a project on Keyword Spotting. Using the built-in microphone of the XIAO ESP32S3 Sense, we will demonstrate how to train a model to recognize specific spoken keywords. This portion of the talk introduces sound classification, a compelling field with many applications, from voice assistants to environmental sound classification.

Throughout the talk, we will use the Arduino IDE and Edge Impulse Studio to create and deploy the TinyML models. Attendees will gain insights into data collection, pre-processing, model design, and impulse design.
Marcelo Rovai was born in São Paulo and holds a Master's in Data Science from the Universidad del Desarrollo (UDD) in Chile and an MBA from IBMEC (INSPER) in Brazil. He graduated in 1982 as an Engineer from UNIFEI, the Federal University of Itajubá, with a specialization from the Escola Politécnica de Engenharia of the University of São Paulo (USP), both institutions located in Brazil. Rovai has experience as a teacher, engineer, and executive at several technology companies, such as CDT/ETEP, AVIBRAS Aeroespacial, SID Informática, ATT-GIS, NCR, DELL, COMPAQ (HP), and more recently at IGT as a VP and Senior Advisor for Latin America. He publishes articles about electronics on websites such as MJRoBot.org, Hackster.io, Instructables.com, and Medium.com. Furthermore, he is a volunteer Professor at UNIFEI in Brazil and lectures at several congresses and universities on IoT and TinyML. He is an active member and Co-Chair of the TinyML4D group, an initiative to bring TinyML education to developing countries.
Watch on YouTube:
Download presentation slides:
Feel free to ask your questions on this thread and keep the conversation going!
Q: Does that mean that training on a PC is mandatory, and the TinyML device only executes the classifier? (Anonymous Attendee)
Q: Is there some strategy to improve the model in real-time, something like a closed feedback loop? Or, when we implement the model, is it static? (Gabriel Alsace)
A: My experience is with TinyML models trained offline on extensive data and then used in production to make predictions. Once trained, this offline model is static and does not learn or adapt from new data it sees in production. That said, a TinyML model may be retrained periodically with new data, although this is not a real-time process. In principle, however, it is possible to create a "closed feedback loop" where the model learns and adapts in real time from the new data it sees in production. This is known as online learning, incremental learning, or adaptive learning. Implementing online learning on a TinyML device is challenging, though, due to the constraints on power, memory, and computational resources. I have seen a couple of papers where academics are studying the possibility:
- TinyML on-device neural network training (https://www.politesi.polimi.it/bitstream/10589/187690/6/TinyML_on_device_neural_network_training%20fin.pdf)
- TinyOL: TinyML with Online Learning on Microcontrollers (https://arxiv.org/pdf/2103.08295.pdf)
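To make the idea concrete, here is a minimal sketch of the on-device learning pattern the TinyOL paper describes: the (heavy) feature extractor stays frozen, and only a small output layer is updated one sample at a time with SGD. All names and the toy data are illustrative, not taken from any real TinyML framework.

```python
import math
import random

class OnlineLastLayer:
    """Single-neuron logistic output layer trained one sample at a time."""
    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        z = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, x, label):
        # One SGD step on the cross-entropy loss for this single sample:
        # cheap enough to run on a microcontroller after each inference.
        err = self.predict(x) - label
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err

random.seed(0)
layer = OnlineLastLayer(n_features=2)
for _ in range(500):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    label = 1.0 if x[0] + x[1] > 0 else 0.0   # toy ground truth
    layer.update(x, label)

print(layer.predict([0.8, 0.8]) > 0.5)    # True after adaptation
```

In a real deployment the hard parts are elsewhere: obtaining trustworthy labels on-device and fitting the update step into the memory budget, which is exactly what those papers investigate.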
Q: Can you talk a little more about why an IMU model trained on data from one IMU does not transfer well to inference on other IMUs? (Ian Ingram)
A: As a rule of thumb, you should make inferences using the same type of sensor used for data collection. The problem of model transferability between different IMUs can be attributed to several factors, such as hardware differences, calibration, and, most importantly, differences in placement and orientation.
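Two common preprocessing steps can reduce (though not eliminate) these cross-IMU differences: per-axis standardization, which absorbs some offset and gain (calibration) mismatch, and using the acceleration magnitude, which is invariant to sensor orientation. A small sketch, with illustrative function names and toy readings:

```python
import math

def standardize(samples):
    """Per-axis zero-mean / unit-variance scaling over a window of
    (x, y, z) readings; absorbs some offset and gain differences."""
    n = len(samples)
    out = []
    for axis in range(3):
        vals = [s[axis] for s in samples]
        mean = sum(vals) / n
        var = sum((v - mean) ** 2 for v in vals) / n
        std = math.sqrt(var) or 1.0   # guard against a constant axis
        out.append([(v - mean) / std for v in vals])
    return list(zip(*out))            # back to per-sample (x, y, z) tuples

def magnitude(sample):
    """Acceleration magnitude: the same whatever way the board is mounted."""
    return math.sqrt(sum(v * v for v in sample))

# Toy accelerometer window (m/s^2), dominated by gravity on the z axis.
readings = [(0.1, 0.0, 9.8), (0.2, -0.1, 9.7), (0.0, 0.1, 9.9)]
print([round(magnitude(s), 2) for s in readings])  # -> [9.8, 9.7, 9.9]
print(standardize(readings)[0])
```

Neither step fixes placement differences (e.g., wrist vs. pocket), which change the motion signal itself; for those, collecting data in the deployment position remains the safest approach.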
Q: What kind of Arduino can we use for TinyML? (Kabir Abidemi Bello)
A: For the applications described in the talk, I recommend any Arduino device with a 32-bit CPU, such as the Arduino Nano 33 BLE Sense. 8-bit Arduinos such as the UNO cannot be used with frameworks like TensorFlow Lite for Microcontrollers (used by Edge Impulse), but they can support elementary, miniature models. The Fraunhofer Institute (www.aifes.ai) offers AIfES (Artificial Intelligence for Embedded Systems), which works with 8- and 16-bit Arduino boards. I also know that Neuton.ai (a tiny AutoML platform) can be used with the UNO.
Q: Can I implement two classifiers in the same device (Nano BLE), one for binary classification and the other one for the multi-class model? (Anonymous Attendee)
A: It is possible, assuming you have enough memory and computational resources. One possible solution is to define a fixed memory "arena" shared by both models (for example, a cascaded multi-tenant solution). Edge Impulse is working on "multi-impulse" support (https://docs.edgeimpulse.com/docs/tutorials/multi-impulse). They provide an example of building an intrusion detection system, where the first model detects glass-breaking sounds; if such a sound is detected, a second model then classifies an image to determine whether a person is present.
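The control flow of such a cascade is simple and worth seeing on its own. The sketch below uses stand-in stub "models" (not real classifiers) to show the structure: a cheap always-on audio model gates a heavier image model, which is why the two can share one tensor arena on a microcontroller, since they never run at the same time.

```python
def sound_model(audio_window):
    """Stub binary classifier: probability that glass is breaking.
    A real deployment would run a small always-on audio model here."""
    return 0.9 if "glass" in audio_window else 0.05

def image_model(frame):
    """Stub multi-class classifier, invoked only when the first stage fires."""
    return "person" if "person" in frame else "no_person"

def cascade(audio_window, frame, threshold=0.5):
    # On a microcontroller, both interpreters could reuse one memory
    # arena: in a cascade they execute strictly one after the other.
    if sound_model(audio_window) >= threshold:
        return image_model(frame)     # expensive stage, rarely reached
    return "idle"                     # common case: skip the image model

print(cascade("glass shatter", "person at window"))  # -> person
print(cascade("quiet room", "person at window"))     # -> idle
```

The design choice to gate the expensive model is what makes two models affordable on one device: average power and latency are dominated by the small first stage.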
Q: What sort of model file should we export from the model for inference on the device? (Farhan Badr)
A: The model file exported from your TinyML training process for inference on a device will typically be in the TensorFlow Lite (TFLite) format.
The TensorFlow Lite Converter takes a TensorFlow model (for example, with weights as 32-bit floats) and converts it into a format optimized for on-device execution, typically quantized to 8-bit integers. Once you have trained your model using TensorFlow, you can use the TensorFlow Lite Converter to produce a .tflite file. Files in this format can be used directly with MicroPython. In the examples shown in the talk, this .tflite file was converted into a C byte array (the FlatBuffer serialized as source code), which is optimized for deployment on the device (you can use the Linux command "xxd" to convert .tflite files to .cc files).
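To demystify that last step: `xxd -i model.tflite > model.cc` simply renders the raw FlatBuffer bytes as a C array you can compile into an Arduino sketch. A small Python sketch of the equivalent transformation (names and the sample bytes are illustrative, not a real model):

```python
def to_c_array(data: bytes, name: str = "model_tflite") -> str:
    """Render raw bytes as a C byte array plus a length constant,
    roughly what `xxd -i` emits for a .tflite file."""
    hex_bytes = ", ".join(f"0x{b:02x}" for b in data)
    return (f"const unsigned char {name}[] = {{ {hex_bytes} }};\n"
            f"const unsigned int {name}_len = {len(data)};\n")

# Any bytes work for illustration; a real .tflite file would be the
# serialized FlatBuffer produced by the TensorFlow Lite Converter.
print(to_c_array(b"\x1c\x00\x00\x00TFL3"))
```

The resulting array is then referenced from the sketch and handed to the on-device interpreter, so the model ships inside the firmware image rather than as a separate file.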