We’ve been doing some work lately that I hope is relevant to the wider community. As TinyML requires hardware (and the complexities that come with hardware, such as cross-compilation toolchains and driver issues) it’s a pain for people to get a peek at what is actually possible with the technology. This especially shows when you’re running workshops on the topic: you’re probably spending more time getting everyone’s environment set up than teaching about the wonders of ML and embedded systems.
But: do we actually need embedded hardware to demonstrate what is possible? It’s fantastic that we can run on a constrained MCU, but I don’t think it’s a requirement per se when introducing the technology. So that got me thinking about the wonderful sensor device that everyone has in their pockets: a smartphone. We can demonstrate everything on it: from data collection (often skipped in introductory courses because it’s so hard to do properly!), to model training, to running inference right on the phone. And the same models and principles apply when deploying back to a microcontroller as well.
Long story short: we’ve released a web-based mobile client that can do all that.
We hook into the accelerometer and microphone to collect data, then use our existing tools (the same ones we use for embedded devices) to ingest that data, run it through a signal processing pipeline to extract features, train a model with Keras or classical ML blocks, and then…
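As a rough sketch of what the collection side involves — the buffer shape, event wiring, and the RMS feature below are illustrative assumptions, not the mobile client's actual code:

```typescript
// Illustrative sketch only; the real mobile-client code differs.

// One accelerometer reading.
type Sample = { x: number; y: number; z: number; timestamp: number };

// Fixed-size window of recent samples, as you'd keep for one recording.
class SampleBuffer {
  private samples: Sample[] = [];
  constructor(private maxSamples: number) {}

  push(s: Sample): void {
    this.samples.push(s);
    // Drop the oldest sample once the window is full.
    if (this.samples.length > this.maxSamples) this.samples.shift();
  }

  get length(): number {
    return this.samples.length;
  }

  // One row per sample — the layout you'd hand to an ingestion pipeline.
  toRows(): number[][] {
    return this.samples.map(s => [s.x, s.y, s.z]);
  }
}

// Browser hookup. The event target is a parameter so the class above stays
// testable outside a browser. Note: on iOS 13+ this additionally requires a
// user gesture and DeviceMotionEvent.requestPermission().
function startCapture(
  buf: SampleBuffer,
  target: { addEventListener(type: string, cb: (ev: any) => void): void },
): void {
  target.addEventListener('devicemotion', (ev: any) => {
    const a = ev.accelerationIncludingGravity;
    if (a && a.x != null) {
      buf.push({ x: a.x, y: a.y, z: a.z, timestamp: ev.timeStamp });
    }
  });
}

// A toy stand-in for the real signal processing blocks:
// per-axis RMS over the captured window.
function rmsFeatures(rows: number[][]): number[] {
  const sums = [0, 0, 0];
  for (const [x, y, z] of rows) {
    sums[0] += x * x;
    sums[1] += y * y;
    sums[2] += z * z;
  }
  return sums.map(s => Math.sqrt(s / rows.length));
}
```

The point of splitting the buffer from the event listener is exactly the portability argument above: the windowing and feature code doesn't care whether samples come from a browser event or an MCU driver.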
Cross-compile TensorFlow Lite Micro, with all the signal processing blocks, and your model into a WebAssembly package! This enables inference on the phone, using the exact same tools. The experience is pretty amazing.
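To make the WebAssembly part concrete, here's a toy version of the mechanism: a hand-assembled module exporting a single `add` function stands in for the real TensorFlow Lite Micro + DSP package (whose bindings are generated, and far larger). The bytes and the API call are the same idea either way: compiled native code, instantiated and called from JavaScript.

```typescript
// Toy demonstration of the mechanism only — the real package wraps
// TensorFlow Lite Micro plus the signal processing blocks.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm", version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // func 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add" = func 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, 1 body
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0/1, i32.add, end
]);

// Compile and instantiate synchronously (works in browsers and Node;
// in a page you'd typically use WebAssembly.instantiateStreaming instead).
const instance = new WebAssembly.Instance(new WebAssembly.Module(wasmBytes));
const add = instance.exports.add as (a: number, b: number) => number;

console.log(add(2, 3)); // → 5
```

In the real client the export is an inference entry point rather than `add`, and features are copied into the module's linear memory before the call — but no plugins, no toolchain, nothing to install on the phone.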
Here’s a video:
The client is open source (Apache 2.0): https://github.com/edgeimpulse/mobile-client, and you can also just use it for data collection for your own TinyML pipeline. Just collect some data, and we give you the files directly through the API or via the Export tab on the dashboard.