PyTorch for tinyML

I have been using PyTorch for deep learning. For the time being, is PyTorch not an option for tinyML? What would the workflow be if I want to start with PyTorch?


Disclaimer: I am learning as I go, so take this with a grain of salt and correct me where I'm wrong.

One way would be to write something equivalent to TF Lite Micro that can consume PyTorch models, or even better the ONNX model format. I did not get an answer at the Pete Warden meetup on whether there will be support for ONNX models in TF Lite Micro.

Maybe one option to get started would be to (rough sketch below):

- export the model to ONNX
- import it into Keras with the onnx2keras library (not that widely used / starred, so I don't know about its maturity)
- save the Keras model
- convert the Keras model to TF Lite and load it with TF Lite Micro
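Something like this, as an untested sketch (I'm assuming onnx2keras exposes `onnx_to_keras` as its entry point, and the tiny model here is just a stand-in for your trained network):

```python
import torch
import torch.nn as nn
import onnx
import tensorflow as tf
from onnx2keras import onnx_to_keras

# Stand-in model; substitute your trained network.
torch_model = nn.Sequential(nn.Linear(8, 4), nn.ReLU(), nn.Linear(4, 2))
dummy_input = torch.randn(1, 8)

# 1. Export the PyTorch model to ONNX
torch.onnx.export(torch_model, dummy_input, "model.onnx",
                  input_names=["input"], output_names=["output"])

# 2. Import the ONNX graph into Keras
k_model = onnx_to_keras(onnx.load("model.onnx"), input_names=["input"])

# 3. Convert to a TF Lite flatbuffer -- the file TF Lite Micro consumes
converter = tf.lite.TFLiteConverter.from_keras_model(k_model)
with open("model.tflite", "wb") as f:
    f.write(converter.convert())
```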

But I guess you would face problems at each step.

Maybe there are better ways

@lcetinsoy Thanks for sharing. I haven't had a good experience converting models between different platforms. That said, hopefully the operations required by tinyML models are simple enough for the conversion to work.

Remark: onnxruntime is written in C++ and has both a C and a C++ API.

However, I guess it is not as small as TF Lite Micro and would not fit in 20 KB, but maybe it fits on a Raspberry Pi or a Jetson Nano.
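For example, sanity-checking an exported model with onnxruntime's Python API looks like this (the C/C++ APIs follow the same session/run flow; the input name "input" and the shape are whatever you chose at export time):

```python
import numpy as np
import onnxruntime as ort

# Load the exported graph and run one inference to verify it.
sess = ort.InferenceSession("model.onnx")
x = np.random.randn(1, 8).astype(np.float32)  # shape must match the export
outputs = sess.run(None, {"input": x})        # None = return all outputs
print(outputs[0].shape)
```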


I do a lot of work with ML on iOS devices where the runtime is (usually) Core ML. It’s often a lot of hassle to convert models from <insert framework here> to Core ML because each framework has its own rules for doing things. For example, padding is different between PyTorch and TF when stride == 2, and so on. Even though we have automated conversion tools, often they don’t work.
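To make the padding point concrete, here is a small illustrative comparison (hedged; shapes and values are just for demonstration): with stride 2, TF's "SAME" padding pads asymmetrically where needed, while PyTorch's padding=1 pads all four sides, so identical weights produce different outputs:

```python
import numpy as np
import torch
import torch.nn.functional as F
import tensorflow as tf

x = np.arange(16, dtype=np.float32).reshape(1, 1, 4, 4)  # NCHW layout

# PyTorch: padding=1 adds a symmetric zero border on all four sides.
w = torch.ones(1, 1, 3, 3)
out_pt = F.conv2d(torch.from_numpy(x), w, stride=2, padding=1)

# TF: "SAME" at stride 2 pads only where needed (bottom/right here).
out_tf = tf.nn.conv2d(x.transpose(0, 2, 3, 1),  # NHWC layout
                      tf.ones((3, 3, 1, 1)), strides=2, padding="SAME")

print(out_pt.numpy().squeeze())                              # 2x2, one alignment
print(np.transpose(out_tf.numpy(), (0, 3, 1, 2)).squeeze())  # 2x2, shifted
```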

For tinyML, I can imagine it makes sense to create a tool that describes the architecture of a TF Lite Micro model (i.e. “these are the ops and this is how they are connected”) and also how to map the weights from a trained model to the ops in this architecture, even if that model was trained with another framework.

What I have in mind here is not an automatic conversion tool (although some of it could be automated) but a drag-and-drop tool that lets you construct the TF Lite Micro model layer by layer, and then link each layer to the weights from the trained PyTorch model (or wherever it came from).

This should be possible because tinyML supports only a limited number of operations, and the models won't be huge. Writing an automated conversion tool is very labor-intensive, but a tool that lets you say, “for this conv layer, use those weights from the PyTorch model”, etc. might just hit the sweet spot. (Or not, just daydreaming here. :wink:)

Of course, you can already do this with some Python code: load the PyTorch model’s weights, and construct a FlatBuffers file for TF Lite Micro that has all the layers in it. But I’m aiming for something a bit more user-friendly here.
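For reference, a hedged sketch of that Python route, assuming a toy two-layer model (the key names like "0.weight" come from nn.Sequential indexing, and I let TFLiteConverter emit the FlatBuffers file rather than building it by hand; a real converter would also have to handle layout differences such as conv weight ordering):

```python
import tensorflow as tf
import torch.nn as nn

# Trained PyTorch model (stand-in); substitute your own.
pt_model = nn.Sequential(nn.Linear(8, 4), nn.ReLU(), nn.Linear(4, 2))
state = pt_model.state_dict()

# Rebuild the same architecture layer-by-layer in Keras.
k_model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(2),
])

# Map each PyTorch tensor onto its Keras layer. PyTorch Linear stores
# weights as (out, in); Keras Dense expects (in, out), hence the transpose.
k_model.layers[0].set_weights([state["0.weight"].numpy().T, state["0.bias"].numpy()])
k_model.layers[1].set_weights([state["2.weight"].numpy().T, state["2.bias"].numpy()])

# The converter writes the FlatBuffers file that TF Lite Micro loads.
tflite_model = tf.lite.TFLiteConverter.from_keras_model(k_model).convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```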

One can save a PyTorch model to ONNX as below:

torch.onnx.export(torch_model, model_input, onnx_model_file)

and use bit.ly/deep-C to compile it for microcontrollers like Arduino. deepC produces smaller code, with half the peak memory requirement.

Good luck.

Here is a brief talk on how to bring ONNX and PyTorch models to your microcontroller.


Jetsons have a “native” inference runtime called TensorRT that’s optimized for the hardware. There’s plenty of documentation on NVIDIA’s developer website if you want to go that route.
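If you do go that route, a hedged sketch of building an engine from an ONNX export with the TensorRT Python API (TensorRT 8-era calls; names can shift between versions, so check the docs for yours):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(TRT_LOGGER)
# Explicit-batch networks are required for ONNX models.
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

# Build and serialize the engine; load it later with a TensorRT runtime.
config = builder.create_builder_config()
engine_bytes = builder.build_serialized_network(network, config)
with open("model.plan", "wb") as f:
    f.write(engine_bytes)
```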

Hi folks.
I wondered if anyone has had success putting their PyTorch model onto an Arduino. I have a trained DETR model from PyTorch that I would like to put on an Arduino (WAN 1310). Any help would be much appreciated.

Thank you.

Also, here is a link to a Deep Learning discussion group on Discord: Deep Learning and Microcontrollers.
