tinyML Talks on January 12, 2021 “Getting Started with TinyML: Train and Deploy TinyML projects with Edge Impulse” by Daniel Situnayake

We held our next tinyML Talks webcast. Daniel Situnayake from Edge Impulse presented “Getting Started with TinyML: Train and Deploy TinyML projects with Edge Impulse” on January 12, 2021.


TinyML is a fast-growing and exciting field of machine learning that enables edge computing and data analytics at extremely low power, typically in the mW range and below. This allows for a variety of always-on use cases targeting battery-operated devices. It has the potential to be transformational for Africa in areas such as agriculture, wildlife conservation, transportation, and manufacturing.
Edge Impulse is building tools that make it easier to integrate machine learning at the edge. These tools are free for developers and easy to use.

Daniel Situnayake will be teaching dozens of engineers, students, and enthusiasts in Nigeria how to get started with tinyML using the Edge Impulse tool. Daniel is the founding tinyML engineer at Edge Impulse. He’s co-author of the O’Reilly book TinyML, alongside Pete Warden. He previously worked on the TensorFlow team at Google, and he co-founded Tiny Farms Inc., deploying machine learning on industrial-scale insect farms.

==========================

Watch on YouTube:
Daniel Situnayake

Download presentation slides:
Daniel Situnayake

Feel free to ask your questions on this thread and keep the conversation going!

Thanks so much for watching the talk! There were a ton of questions, and I worked with our CTO, Jan, to answer them:

What is the difference when changing the frequency? Do all devices support the same frequencies, or does it differ between devices?

For fully supported devices we try to keep frequencies the same so we can mix and match datasets easily, but you’re free to choose the frequencies that you use. Higher-frequency models typically give you more accuracy but are more computationally expensive.

What tools do you have to get the models running on unsupported MCUs like STM32s (Bluepill/BlackPill)?

The C++ library export gives you a library with zero external dependencies that can be compiled on anything with a C++ compiler. See “Deploying your impulse locally” on docs.edgeimpulse.com for a wide variety of example build scripts.
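As a rough illustration, here’s what calling an exported C++ library typically looks like, assuming the SDK’s standard run_classifier() entry point and the constants from the generated model metadata header (check your own export for the exact names); the feature values themselves are placeholders:

```cpp
// Minimal sketch: feed one window of raw features into an exported Edge Impulse
// C++ library and print the classification scores. Entry points and constants
// are assumed from the standard SDK; check your generated headers.
#include <cstdio>
#include <cstring>
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

// One window of raw features, e.g. copied from “Live classification” in the Studio.
static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE] = { /* ... */ };

// Callback that copies slices of the feature buffer to the classifier on demand.
static int get_feature_data(size_t offset, size_t length, float *out_ptr) {
    memcpy(out_ptr, features + offset, length * sizeof(float));
    return 0;
}

int main() {
    signal_t signal;
    signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
    signal.get_data = &get_feature_data;

    ei_impulse_result_t result;
    EI_IMPULSE_ERROR err = run_classifier(&signal, &result, false /* debug */);
    if (err != EI_IMPULSE_OK) {
        printf("run_classifier failed (%d)\n", err);
        return 1;
    }
    for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
        printf("%s: %.3f\n", result.classification[i].label, result.classification[i].value);
    }
    return 0;
}
```

Compile this together with the exported edge-impulse-sdk, model-parameters, and tflite-model folders using any of the build scripts from the docs.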

If the feature explorer shows data points clustered within the same area, would that be an indication that a deep learning model is not the best to use?

If the data already separates very well after signal processing, it could be a good indication that deep learning isn’t 100% necessary, which is a good thing! If you can avoid deep learning you’ll have a really understandable system rather than a partial black box. If that’s the outcome of your exploration with Edge Impulse we’d be very happy, and you can use our DSP libraries to get to the same features on-device.

Was all of the demo developed with the free layer of Edge Impulse?

Yes.

Does it mean that a tinyML/Edge Impulse-trained model is hardware dependent?

Sort of. The model runs on anything with a C++ compiler, but we use hardware acceleration on a variety of targets (like Arm MCUs and Synopsys DSPs), which drastically decreases inferencing time. It could be that the model is only feasible with that acceleration enabled.

Is it possible to export to ONNX?

You can export the TensorFlow SavedModel from the Dashboard page in the Studio, and convert that to ONNX.

Can the model be deployed to a Jetson device?

Yes, but only for the neural networks. You can download the full TensorFlow SavedModel or TFLite files from the project dashboard and run these on the Jetson.

Can Edge Impulse target other languages, e.g. Python, Ada, Rust, etc.?

Sort of. You can export to WebAssembly and then run it in anything with WebAssembly bindings, or export as a C++ library and then use C++ bindings from your favourite language. Or just build the model into a standalone executable (see “Running your impulse locally” in the docs) and invoke that.
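For the “C++ bindings from your favourite language” route, one common pattern is to hide the exported library behind a small C-linkage function and call it over FFI (ctypes in Python, bindgen in Rust, and so on). A rough sketch follows; the ei_classify wrapper is hypothetical, not part of the Edge Impulse SDK, and it assumes the SDK’s signal_from_buffer helper and run_classifier entry point:

```cpp
// Hypothetical C-linkage wrapper around an exported Edge Impulse C++ library,
// so that other languages can call it over FFI. Not part of the SDK itself.
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

extern "C" int ei_classify(float *features, size_t n_features,
                           float *scores_out, size_t n_scores) {
    if (n_features != EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE ||
        n_scores < EI_CLASSIFIER_LABEL_COUNT) {
        return -1;  // caller passed the wrong buffer sizes
    }

    // Wrap the flat feature buffer as a signal_t using the SDK's numpy helpers.
    signal_t signal;
    numpy::signal_from_buffer(features, n_features, &signal);

    ei_impulse_result_t result;
    if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) {
        return -2;  // inference failed
    }

    // Copy out one score per label, in the order defined by the model metadata.
    for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
        scores_out[i] = result.classification[i].value;
    }
    return 0;
}
```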

For models deployed through the Arduino library, does the Arduino need an AI/ML offloading chip, or would it be possible to run them on all Arduino devices?

No, it runs on general-purpose MCUs.

Looks like your services are free. THANK YOU! What is your business model?

Thanks! We have two revenue streams: silicon vendors pay some money to offset the cost of serving free developers, and we have an enterprise version for larger customers, which offers collaboration on projects, more compute, and tools to build large datasets.

Very nice talk! How do we collect data, train models, and deploy the network for unsupported devices? What is the process?

Thanks! You can capture data from any device using the “Data forwarder” (see the docs), then deploy to any device via the C++ library export, which gives you a library with no external dependencies that runs on anything with a modern compiler. There are lots of examples of this in the docs too!
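To make the data forwarder part concrete: on the device side you just stream one line of comma-separated sensor values per sample over serial at a fixed rate, and the edge-impulse-data-forwarder CLI on the host picks the stream up and sends it to the Studio. A minimal sketch for an Arduino-class board, with the sensor reads stubbed out as placeholders:

```cpp
// Device-side sketch for the Edge Impulse data forwarder: print one line of
// comma-separated values per sample over serial at a fixed rate. The
// edge-impulse-data-forwarder CLI on the host forwards the stream to the Studio.
#include <Arduino.h>

static const unsigned long SAMPLE_INTERVAL_MS = 10;  // 100 Hz

// Placeholder stubs; replace with your actual sensor driver.
static float read_accel_x() { return 0.0f; }
static float read_accel_y() { return 0.0f; }
static float read_accel_z() { return 0.0f; }

void setup() {
    Serial.begin(115200);
}

void loop() {
    Serial.print(read_accel_x());
    Serial.print(',');
    Serial.print(read_accel_y());
    Serial.print(',');
    Serial.println(read_accel_z());
    delay(SAMPLE_INTERVAL_MS);
}
```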

Is it possible to use an Arduino ONE to collect sensor data and another board for ML processing?

You mean an Arduino UNO? Probably not: 8-bit MCUs are out of our realm. There are plenty of 32-bit Arduinos that work very well, though.

Hi. Can I do unsupervised machine learning using Edge Impulse? If so, can you share more details on that?

A bit with anomaly detection, which is more or less unsupervised; but Edge Impulse is mainly used for supervised learning.

This is great! Is there a plan to use Edge Impulse in schools/universities?

Yeah, we already have a number of universities using Edge Impulse and welcome any usage in education at no cost.


Thanks for the share