tinyML Talks on March 5, 2024 “Physics-Aware Auto Tiny Machine Learning” by Swapnil Sayan Saha

We held our next tinyML Talks webcast. Swapnil Sayan Saha from STMicroelectronics Inc. presented "Physics-Aware Auto Tiny Machine Learning" on March 5, 2024.

Tiny machine learning has enabled resource-constrained end devices to make intelligent inferences for time-critical and remote applications from unstructured data. However, realizing edge artificial intelligence systems that can perform long-term high-level reasoning and obey the underlying physics within the tight platform resource budget is challenging. The talk introduces the concept of neurosymbolic auto tiny machine learning, where the synergy of physics-based process models and neural operators is automatically co-optimized based on platform resource constraints. Neurosymbolic artificial intelligence combines the context awareness and integrity of symbolic techniques with the robustness and performance of machine learning models. The talk showcases how fast, gradient-free and black-box Bayesian optimization can automatically construct the most performant learning-enabled, physics, and context-aware intelligent programs from a search space containing neural and symbolic operators. Several previously unseen applications are showcased, including onboard physics-aware neural-inertial navigation, on-device human activity recognition, on-chip fall detection, neural-Kalman filtering, and co-optimization of neural and symbolic processes.
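For readers curious what such a search loop can look like, below is a minimal sketch (not the speaker's TinyNS code) of gradient-free, black-box Bayesian optimization over a mixed neural/symbolic search space with a platform resource penalty, using scikit-optimize. The scoring formulas and the 256 KB flash budget are toy assumptions for illustration.

```python
from skopt import gp_minimize
from skopt.space import Categorical, Integer

FLASH_BUDGET_KB = 256  # assumed platform constraint

def evaluate(params):
    filters, layers, paradigm, sparsity = params
    # Toy stand-ins for "train the candidate, then measure accuracy and
    # flash footprint"; a real search would train and profile each model.
    acc = 0.6 + 0.005 * filters + 0.02 * layers - 0.1 * sparsity
    if paradigm == "symbolic_before":
        acc += 0.05  # pretend physics-derived inputs help a little
    size_kb = filters * layers * 4 * (1.0 - sparsity)
    # Penalize candidates that exceed the platform budget so the optimizer
    # is steered toward deployable architectures.
    penalty = max(0.0, size_kb - FLASH_BUDGET_KB) / FLASH_BUDGET_KB
    return -(min(acc, 1.0) - penalty)  # gp_minimize minimizes

search_space = [
    Integer(4, 64, name="filters"),
    Integer(1, 4, name="layers"),
    Categorical(["symbolic_before", "symbolic_after"], name="paradigm"),
    Categorical([0.0, 0.5, 0.8], name="sparsity"),  # pruning as a hyperparameter
]

best = gp_minimize(evaluate, search_space, n_calls=30, random_state=0)
print("best candidate:", best.x, "score:", -best.fun)
```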

Swapnil Sayan Saha is an algorithm development engineer at STMicroelectronics Inc. He received his Ph.D. and M.S. in Electrical and Computer Engineering from the University of California, Los Angeles in 2023 and 2021 respectively, and B.Sc. in Electrical and Electronics Engineering from the University of Dhaka in 2019. His research explores how rich, robust, and complex inferences can be made from sensors onboard low-end embedded systems within tight resource budgets in a platform-aware fashion. To date, he has published more than 25 peer-reviewed articles/patents and received more than 30 awards in robotics, technical, and business-case forums worldwide.

=========================
Download presentation slides:
Swapnil Sayan Saha

Watch on YouTube:
Swapnil Sayan Saha

Feel free to ask your questions on this thread and keep the conversation going!

=========================

Q: Do these optimization functions inherently omit model reduction that could be seen with pruning and quantization? If so, does this mean we could be leaving out models that could have fit on the hardware with another step of model preparation before deployment?
A: TinyNS supports unstructured pruning and post-training quantization from TensorFlow. It treats pruning and quantization as hyperparameters in the architecture search space, so models that only fit on the hardware after compression are still explored rather than excluded.
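As a rough illustration of that idea (not the actual TinyNS code), the sketch below exposes pruning sparsity and a quantization flag as candidate hyperparameters using the TensorFlow Model Optimization toolkit. The tiny model, the elided training loop, and the helper name `build_candidate` are assumptions for illustration.

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

def build_candidate(sparsity: float, quantize: bool) -> bytes:
    # Toy model; a real search would build this from sampled architecture
    # hyperparameters.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(16,)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(4, activation="softmax"),
    ])
    if sparsity > 0.0:
        # Unstructured magnitude pruning at the sampled sparsity level.
        model = tfmot.sparsity.keras.prune_low_magnitude(
            model,
            pruning_schedule=tfmot.sparsity.keras.ConstantSparsity(
                target_sparsity=sparsity, begin_step=0))
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    # ... model.fit(...) with the tfmot.sparsity.keras.UpdatePruningStep()
    # callback would go here ...
    if sparsity > 0.0:
        model = tfmot.sparsity.keras.strip_pruning(model)
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    if quantize:
        converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training quantization
    return converter.convert()  # flatbuffer; its size is scored by the search

# e.g., the optimizer can compare len(build_candidate(0.8, True)) against
# len(build_candidate(0.0, False)) when checking the flash budget.
```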

Q: What are the neurosymbolic operations used?
A: TinyNS supports four of the five neurosymbolic paradigms: (1) symbolic after neural, (2) symbolic before neural, (3) symbolic + neural, and (4) symbolic rules compiled during training.
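Schematically, the first three paradigms are just different compositions of a neural function and a symbolic (physics or rule-based) function. The sketch below is illustrative only; `neural` and `symbolic` are placeholder callables, not APIs from the talk.

```python
# `neural` is a trained network, `symbolic` a physics or rule-based model;
# both are placeholder callables here.
def symbolic_after_neural(x, neural, symbolic):
    return symbolic(neural(x))   # e.g., rules post-process network outputs

def symbolic_before_neural(x, neural, symbolic):
    return neural(symbolic(x))   # e.g., physics-derived features feed the net

def symbolic_plus_neural(x, neural, symbolic, alpha=0.5):
    # Parallel blend of a process-model prediction and the network's.
    return alpha * symbolic(x) + (1.0 - alpha) * neural(x)

# Paradigm 4 (symbolic rules compiled during training) instead shows up as a
# physics-consistency term added to the training loss rather than as a
# composition at inference time.
```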

Q: Where and how did the physics of the problem come to the algorithm?
A: The symbolic portion of the neurosymbolic architecture injects physics in the neural pipeline.

Q: Which pruning policy are you supporting?
A: Unstructured pruning.

Q: Do we have to write a separate ML model for getting the feature importance, or is there some inbuilt function?
A: You don't. However, if you have domain knowledge about the application, you can feed such feature-selection algorithms in as an additional term in the NAS optimization function for improved convergence.

Q: How do you gain from pruning in inference and mapping the network?
A: Pruning cuts out redundant weights of a neural network with minimal accuracy loss; unstructured pruning can reduce the memory footprint by 3-4x.

Q: How does tinyML/edge AI compare with building the ML model in MATLAB/Simulink and embedding it in an embedded firmware program?
A: MATLAB provides one of many ways to convert a model from a prototyping language (e.g., Python) to a deployment language and program template (e.g., C). Most of these model converters in TinyML perform runtime optimization, memory optimization, loop unrolling, model flatbuffer conversion, code generation, etc.
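As a concrete example of the flatbuffer-conversion step, here is a minimal sketch using stock TensorFlow Lite APIs (not a tool specific to the talk; the toy model is an assumption):

```python
import tensorflow as tf

# Toy model standing in for a trained network.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(2),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # size/latency optimizations
tflite_model = converter.convert()                    # serialized flatbuffer

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
# `xxd -i model.tflite > model_data.cc` then embeds it as a C array
# for a microcontroller firmware build.
```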

Q: Rather than rely on GPS to take over for long-term drift, could you use it to recalibrate the neural algorithm and continue its use?
A: We occasionally reset the inertial portion of the neural-Kalman filter with GPS readings.
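In simplified form, that reset is just a Kalman measurement update. Below is a 1-D toy sketch, not the talk's implementation: `neural_displacement` stands in for the neural-inertial model, and the noise values and update interval are arbitrary assumptions.

```python
import numpy as np

x, P = 0.0, 1.0   # position estimate and its variance
Q, R = 0.05, 4.0  # assumed process and GPS measurement noise variances

def neural_displacement(imu_window):
    # Placeholder for the neural-inertial model's predicted displacement.
    return float(np.mean(imu_window))

rng = np.random.default_rng(0)
for step in range(100):
    imu_window = rng.normal(0.5, 0.1, size=20)  # synthetic IMU samples
    # Predict: propagate the state with the learned displacement;
    # uncertainty (and hence drift) grows between fixes.
    x += neural_displacement(imu_window)
    P += Q
    # Update: every 25th step a GPS fix arrives and pulls the state back,
    # bounding the accumulated drift.
    if step % 25 == 24:
        z = x + rng.normal(0.0, np.sqrt(R))  # synthetic GPS reading
        K = P / (P + R)                      # Kalman gain
        x += K * (z - x)
        P *= 1.0 - K
```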

Q: What was the GitHub link, please?
A: https://github.com/nesl/neurosymbolic-tinyml (TinyNS: Platform-Aware Neurosymbolic Auto Tiny Machine Learning)

Q: What type of framework exists for development integration with use-case-specific chipsets that could provide feature data?
A: NanoEdge AI Studio, MEMS Studio, Edge Impulse, SensiML, Qeexo AutoML, Reality AI, Neuton.AI, etc.

Q: Generally, how long does the optimization process take?
A: This is problem-dependent. In our cases, it usually took 3-7 days.

Q: Can you list those tools in the chat?
A: NanoEdge AI Studio, MEMS Studio, Edge Impulse, SensiML, Qeexo AutoML, Reality AI, Neuton.AI, etc.

Q: Is symbolic regression (SR) useful in embedded systems?
A: Symbolic regression falls under the symbolic-after-neural neurosymbolic paradigm; you can think of it as a decision tree. So yes, it is definitely helpful for many applications.
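To make the decision-tree analogy concrete, here is a small illustrative sketch, not from the talk, that distills a stubbed "neural" function into a shallow decision tree via scikit-learn (standing in for a true symbolic-regression tool). The synthetic data and target function are assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(1000, 2))

def neural_model(X):
    # Stand-in for a trained network's predictions.
    return np.sin(3.0 * X[:, 0]) + 0.5 * X[:, 1] ** 2

# Fit a shallow tree to the network's outputs: a compact, branch-only
# surrogate cheap enough for microcontroller inference.
tree = DecisionTreeRegressor(max_depth=4)
tree.fit(X, neural_model(X))
print(export_text(tree, feature_names=["x0", "x1"]))  # human-readable rules
```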