tinyML On Device Learning Forum 2023

We held the tinyML On Device Learning Forum 2023 on May 16th, 2023.

To date, most ultra-low power machine learning (ML) applications at the edge are trained “off device” (typically in the cloud, where virtually unlimited computing resources are available), while the edge devices perform the inference. Many successful applications have been deployed in this fashion, as demonstrated by the rapid growth of the tinyML community and the support from industry.

It’s time to move to the next milestone: On Device Learning (ODL). The ambition is to replace off-device training with localized training and adaptive “intelligence”. Industry and academic experts are actively exploring how to better adapt edge devices and applications to the time-varying environments in which they are expected to operate for a long time.

To support and further accelerate this ground-breaking evolution, the tinyML Foundation created the On Device Learning (ODL) working group, which has enthusiastically started its activities. The ODL working group is very excited to invite everyone to join this first-ever virtual event and learn from experts about their current research and state-of-the-art solutions for ODL!

Never before has ML been characterized by such innovative waves of technology. And the tinyML Foundation is accelerating the growth of this vibrant ecosystem of skills and technology, resulting in new applications and end uses.

Merging insights from artificial and biological neural networks for neuromorphic edge intelligence
Charlotte Frenkel, Assistant professor, Delft University of Technology

The development of efficient bio-inspired training algorithms and adaptive hardware is currently missing a clear framework. Should we start from the brain's computational primitives and figure out how to apply them to real-world problems (bottom-up approach), or should we build on working AI solutions and fine-tune them to increase their biological plausibility (top-down approach)? In this talk, we will see why biological plausibility and hardware efficiency are often two sides of the same coin, and how neuroscience- and AI-driven insights can cross-feed each other toward low-cost on-device learning.

Charlotte Frenkel received the Ph.D. degree from Université catholique de Louvain (UCLouvain), Belgium, in 2020. After a postdoc at the Institute of Neuroinformatics, UZH and ETH Zürich, Switzerland, she joined Delft University of Technology, The Netherlands, as an Assistant Professor in July 2022. Her research focuses on neuromorphic integrated circuit design and learning algorithms for adaptive edge computing. She received a best paper award at the IEEE ISCAS 2020 conference, as well as the FNRS Nokia Bell Labs Scientific Award, the FNRS IBM Innovation Award and the UCLouvain/ICTEAM Best Thesis Award for her Ph.D. thesis. She serves as a TPC member for the tinyML Research Symposium and for the IEEE ESSCIRC, ISLPED, and DATE conferences.

Forward Learning with Top-Down Feedback: Solving the Credit Assignment Problem without a Backward Pass
Giorgia Dellaferrera, Researcher, Institute of Neuroinformatics Zurich

Supervised learning in artificial neural networks typically relies on backpropagation, where the weights are updated based on the error-function gradients and sequentially propagated from the output layer to the input layer. Although this approach has proven effective in a broad domain of applications, it lacks biological plausibility in many regards, including the weight symmetry problem, the dependence of learning on nonlocal signals, the freezing of neural activity during error propagation, and the update locking problem. Alternative training schemes have been introduced, including sign symmetry, feedback alignment, and direct feedback alignment, but they invariably rely on a backward pass that hinders the possibility of solving all the issues simultaneously. “Forward-only” algorithms, which train neural networks while avoiding a backward pass, have recently gained attention as a way of solving the biologically unrealistic aspects of backpropagation. In this talk, we discuss PEPITA and the Forward-Forward algorithm, which train artificial neural networks by replacing the backward pass of the backpropagation algorithm with a second forward pass. In the second pass, the input signal is modulated based on the top-down error of the network (PEPITA) or by other input samples (Forward-Forward). We show that these learning rules comprehensively address all the above-mentioned issues and can be applied to train both fully connected and convolutional models on datasets such as MNIST, CIFAR-10, and CIFAR-100. Furthermore, as they do not require precise knowledge of the gradients, nor any non-local information, “Forward-only” algorithms are well-suited for implementation in neuromorphic hardware.
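
To make the two-forward-pass idea concrete, here is a minimal numpy sketch of a PEPITA-style update on a tiny two-layer network. It is only an illustrative sketch under stated assumptions: the layer sizes, learning rate, and the exact form of the local updates are chosen for readability and are not the reference implementations from the PEPITA or Forward-Forward papers.

```python
# Toy PEPITA-style learning step (illustrative sketch only; see the original papers
# for the exact update rules). No backward pass: two forward passes per example.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Tiny fully connected network: 784 -> 128 -> 10
W1 = rng.normal(0.0, 0.05, (128, 784))
W2 = rng.normal(0.0, 0.05, (10, 128))
# Fixed random projection carrying the top-down output error back to the input.
F = rng.normal(0.0, 0.05, (784, 10))

def pepita_step(x, y_onehot, lr=0.01):
    global W1, W2
    # First (standard) forward pass.
    h1 = relu(W1 @ x)
    y_hat = W2 @ h1
    err = y_hat - y_onehot               # top-down error of the network

    # Second forward pass with the error-modulated input.
    x_mod = x + F @ err
    h1_mod = relu(W1 @ x_mod)

    # Local updates built only from quantities available during the two forward passes.
    W1 -= lr * np.outer(h1 - h1_mod, x_mod)
    W2 -= lr * np.outer(err, h1_mod)
    return err

# Usage on a random "image" and a one-hot label.
x = rng.random(784)
y = np.eye(10)[3]
for _ in range(10):
    err = pepita_step(x, y)
print("squared error after a few steps:", float(err @ err))
```

The point of the sketch is that every term in the weight updates is available locally during one of the two forward passes, so no error gradients are propagated backward through the network.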

Giorgia Dellaferrera completed her PhD in computational neuroscience at the Institute of Neuroinformatics (ETH Zurich and the University of Zurich) and IBM Research Zurich with Prof. Indiveri, Prof. Eleftheriou, and Dr. Pantazi. Her doctoral thesis focused on the interplay between neuroscience and artificial intelligence, with an emphasis on learning mechanisms in brains and machines. During her PhD, she visited the lab of Prof. Kreiman at Harvard Medical School (US), where she developed a biologically inspired training strategy for artificial neural networks. Before her PhD, Giorgia obtained a master's degree in Applied Physics at the Swiss Federal Institute of Technology Lausanne (EPFL) and worked as an intern at the Okinawa Institute of Science and Technology, Logitech, Imperial College London, and EPFL.

NeuroMem®, Ultra Low Power hardwired incremental learning and parallel pattern recognition
Guy Paillet, Co-founder and Chairman, General Vision Holdings

GV will present a Tiny RTML platform comprising an ST Nucleo-64 board together with a NeuroShield carrying 37 parallelized NM500 chips. This allows maintaining a parallel content-addressable set of, for example, 21,000 Chinese characters. Submitting the image of a Chinese character (a 16 x 16 pixel pattern) returns a category pointing to its English meaning within a constant search time of 30 microseconds. Learning an additional character (on-the-spot learning) also takes about 30 microseconds per unknown character. The just-released ANM5500 will do the same with only 4 chips and 5 times faster, still at milliwatt power. General Vision's goal is to solve real-world image recognition with learning and recognition running on a small battery in, for example, a standalone (no network connection) Barbie doll; hence the patented “Monolithic Image Perception Device”, successor of the MTVS (Miniature Trainable Vision Sensor), which allows on-image-sensor learning and recognition.
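
For readers unfamiliar with the NeuroMem model, the Python mock below sketches how a learn/recognize cycle behaves conceptually: prototypes are stored with a category and an “influence field”, and recognition is a winner-take-all search by distance. The class and method names and the defaults are invented for illustration; on the actual chips every committed neuron computes its distance in parallel hardware, which is what yields the constant search time quoted above.

```python
# Conceptual software mock of a NeuroMem-style learn/recognize cycle (hypothetical names;
# not the vendor API). Real chips evaluate all neurons in parallel, in constant time.
class NeuroMemMock:
    def __init__(self, max_neurons=37 * 576, max_if=4096):
        # 37 NM500 chips x 576 neurons each ~ 21,000 storable patterns, as in the abstract.
        self.neurons = []            # committed neurons: (pattern, category, influence field)
        self.max_neurons = max_neurons
        self.max_if = max_if         # initial "influence field" (distance radius)

    @staticmethod
    def _l1(a, b):
        # Manhattan (L1) distance over byte vectors, one of the norms used by such chips.
        return sum(abs(x - y) for x, y in zip(a, b))

    def learn(self, pattern, category):
        """Commit a new prototype unless the pattern is already recognized as this category."""
        if self.classify(pattern) == category:
            return False
        if len(self.neurons) < self.max_neurons:
            self.neurons.append((list(pattern), category, self.max_if))
            return True
        return False

    def classify(self, pattern):
        """Return the category of the closest prototype within its influence field, else None."""
        best_cat, best_dist = None, None
        for proto, cat, aif in self.neurons:
            d = self._l1(pattern, proto)
            if d < aif and (best_dist is None or d < best_dist):
                best_cat, best_dist = cat, d
        return best_cat
```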

Guy’s background is in hardware design since 1976, starting as an application engineer for the Motorola MC6800. He has been innovating in high-performance Tiny Machine Learning since 1993, when he invented the ZISC36 with IBM Paris. Guy and his family moved from France in 1996, and he co-founded General Vision in 2000. Since then, General Vision has licensed its NeuroMem ZISC technology, giving birth to four additional successful neuromorphic ASICs from 2007 to 2022, including the Intel Curie for “NeuroMEMS.”

On-Chip Learning and Implementation Challenges with Oscillatory Neural Networks
Aida Todri-Sanial, Professor, Department of Electrical Engineering, Eindhoven University of Technology

Research on adaptive and continuous learning, beyond supervised or unsupervised learning, is becoming of central interest for training neural networks that evolve as the environment and input data change over time. Moreover, ongoing research efforts on brain-inspired computing provide energy-efficient computing architectures implementable on edge devices. In recent years, computing with coupled oscillators, or oscillatory neural networks (ONNs), has emerged as an alternative computing paradigm offering massive parallelism and energy efficiency. Most research efforts on ONNs focus on hardware implementation, such as materials, devices, and digital, analog, and mixed-signal circuit design, and on benchmarking AI applications. In this talk, I will focus mainly on how to train ONNs and on possible implementations of on-chip learning that take ONN topology and synaptic connections into account.
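
As a generic illustration of what “computing with coupled oscillators” means, the sketch below simulates a small Kuramoto-style ONN whose coupling weights are set Hebbian-fashion so that a stored binary pattern becomes a stable phase configuration; a noisy phase pattern then relaxes back to the stored memory. This is only a textbook-style toy, not the training methods or on-chip implementations discussed in the talk.

```python
# Toy Kuramoto-style oscillatory neural network used as an associative memory
# (illustrative only; parameters and coupling rule are arbitrary choices).
import numpy as np

rng = np.random.default_rng(0)

# One stored binary pattern, encoded as target phase relations (in-phase / anti-phase).
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
N = len(pattern)
J = np.outer(pattern, pattern) / N        # Hebbian coupling (synaptic) matrix
np.fill_diagonal(J, 0.0)

# Start from a noisy version of the stored phase pattern.
phases = np.where(pattern > 0, 0.0, np.pi) + rng.normal(0.0, 0.8, N)

dt, K = 0.1, 2.0
for _ in range(200):
    # Each oscillator drifts toward the phase alignment prescribed by its couplings.
    dphi = K * np.sum(J * np.sin(phases[None, :] - phases[:, None]), axis=1)
    phases += dt * dphi

# Read out: in-phase with oscillator 0 -> +1, anti-phase -> -1 (global sign fixed by pattern[0]).
readout = np.where(np.cos(phases - phases[0]) > 0, 1, -1)
print("recovered pattern:", readout * pattern[0])
print("stored pattern:   ", pattern)
```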

Aida Todri-Sanial received the B.S. degree in electrical engineering from Bradley University, IL, in 2001, the M.S. degree in electrical engineering from Long Beach State University, CA, in 2003, and the Ph.D. degree in electrical and computer engineering from the University of California, Santa Barbara, in 2009. She is currently a Full Professor in the Electrical Engineering Department at Eindhoven University of Technology, the Netherlands, and a Director of Research at the French National Centre for Scientific Research (CNRS). Dr. Todri-Sanial was a visiting fellow at the Cambridge Graphene Center and Wolfson College at the University of Cambridge, UK, during 2016-2017. Previously, she was an R&D Engineer at Fermi National Accelerator Laboratory, IL. She has also held visiting research positions at Mentor Graphics, Cadence Design Systems, STMicroelectronics, and IBM TJ Watson Research Center. Her research interests focus on emerging technologies and novel computing paradigms such as neuromorphic and quantum computing.

Online Learning TinyML for Anomaly Detection Based on Extreme Values Theory
Eduardo Dos Santos Pereira, Technology Expert III, Serviço Nacional de Aprendizagem Industrial São Paulo

Anomalies in a system are rare, extreme events that can have a significant impact. Extreme value theory deals with such events, and it has inspired the unsupervised, online-learning TinyML algorithm proposed in this work. The algorithm uses the two-parameter Weibull distribution to detect anomalies in discrete time series, and it runs on microcontroller unit (MCU) devices. It has the potential to contribute to various industries, from manufacturing to healthcare, by enabling real-time monitoring and predictive maintenance. The ability to detect anomalies is crucial in many applications, including monitoring environmental and location parameters based on sensor readings. TinyML can be a powerful tool for detecting abnormal or anomalous behavior in real time.
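
As a rough sketch of the general approach (not the speaker's actual algorithm), the Python class below keeps a sliding window of recent block maxima, fits a two-parameter Weibull distribution with a simple Weibull-plot regression, and flags a reading when its tail probability under the fitted model falls below a threshold. The window size, alarm threshold, and fitting method are illustrative assumptions; a real MCU implementation would likely use a cheaper estimator and fixed-point arithmetic.

```python
# Sketch of EVT-flavored online anomaly detection with a two-parameter Weibull model
# (hypothetical illustration of the idea in the abstract, not the published algorithm).
import math
import random
from collections import deque

class WeibullAnomalyDetector:
    def __init__(self, window=64, p_alarm=1e-3):
        self.peaks = deque(maxlen=window)   # recent block maxima / peak readings
        self.p_alarm = p_alarm              # tail probability below which we raise an alarm
        self.k = 1.0                        # Weibull shape
        self.lam = 1.0                      # Weibull scale

    def _refit(self):
        # Weibull-plot regression: ln(-ln(1 - F_i)) = k * ln(x_i) - k * ln(lambda),
        # with a median-rank estimate of the empirical CDF F_i.
        xs = sorted(v for v in self.peaks if v > 0)
        n = len(xs)
        if n < 8:
            return
        X = [math.log(x) for x in xs]
        Y = [math.log(-math.log(1.0 - (i - 0.3) / (n + 0.4))) for i in range(1, n + 1)]
        mx, my = sum(X) / n, sum(Y) / n
        sxx = sum((a - mx) ** 2 for a in X)
        sxy = sum((a - mx) * (b - my) for a, b in zip(X, Y))
        if sxx > 0.0:
            self.k = max(sxy / sxx, 1e-3)
            self.lam = math.exp(mx - my / self.k)

    def update(self, value):
        """Feed one peak value; return True if it looks anomalous under the current fit."""
        p_tail = math.exp(-((max(value, 1e-12) / self.lam) ** self.k))
        is_anomaly = len(self.peaks) >= 8 and p_tail < self.p_alarm
        self.peaks.append(value)
        self._refit()
        return is_anomaly

# Example: mostly ordinary peaks, then one extreme reading.
random.seed(1)
det = WeibullAnomalyDetector()
for _ in range(100):
    det.update(random.weibullvariate(1.0, 1.5))
print("extreme reading flagged:", det.update(8.0))
```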

Eduardo S. Pereira holds a Ph.D. degree in Astrophysics from the Brazilian National Institute for Space Research (INPE). He completed postdoctoral research in Cosmology (INPE), Computational Astronomy (University of São Paulo, USP), and Artificial Intelligence (UNICAMP). He works as a Technology Specialist at SENAI in São José dos Campos, focusing on topics such as Artificial Intelligence, Embedded Systems, Computer Vision, and the Modeling and Simulation of physical processes.

=========================

Watch on YouTube:
Charlotte Frenkel
Giorgia Dellaferrera
Guy Paillet
Aida Todri-Sanial
Eduardo Dos Santos Pereira

Download presentation slides:
Charlotte Frenkel
Giorgia Dellaferrera
Guy Paillet
Aida Todri-Sanial
Eduardo Dos Santos Pereira

Feel free to ask your questions on this thread and keep the conversation going!

How can I download the slides from the On Device Learning workshop 2023?