Two tinyML Talks on September 1, 2020 by Suren Jayasuriya from Arizona State University and Kristofor Carlson from BrainChip Inc

Announcing our next tinyML Talks webcast! Suren Jayasuriya from Arizona State University will present "Towards Software-Defined Imaging: Adaptive Video Subsampling for Energy-Efficient Object Tracking" on September 1, 2020 at 8:00 AM Pacific Time, and Kristofor Carlson from BrainChip Inc. will present "The Akida Neural Processor: Low Power CNN Inference and Learning at the Edge" on September 1, 2020 at 8:30 AM Pacific Time.

IMPORTANT: Please register here


Suren Jayasuriya (left) and Kristofor Carlson (right)

CMOS image sensors have become more computational in nature, incorporating region-of-interest (ROI) readout, high dynamic range (HDR) functionality, and burst photography capabilities. Software-defined imaging is the new paradigm, mirroring similar advances in software-defined radio, where image sensors are increasingly programmable and configurable to meet application-specific needs. In this talk, we present a suite of software-defined imaging algorithms that leverage CMOS sensors’ ROI capabilities for energy-efficient object tracking. In particular, we discuss how adaptive video subsampling can learn to jointly track objects and subsample future image frames in an online fashion. We present software results as well as FPGA-accelerated algorithms that achieve video-rate latency. Further, we highlight emerging work on using deep reinforcement learning to perform adaptive video subsampling during object tracking. All this work points to the software-hardware co-design of intelligent image sensors in the future.
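To make the idea concrete ahead of the talk, here is a minimal sketch (not the speaker's actual algorithm) of adaptive video subsampling: the tracker's last bounding box, plus a margin, defines the ROI that the sensor reads out on the next frame, so only a fraction of the pixels are ever digitized. The centroid "tracker" and all function names here are illustrative placeholders.

```python
import numpy as np

def roi_from_track(bbox, margin, frame_shape):
    """Expand the tracker's last bounding box by a margin to form the next ROI."""
    x, y, w, h = bbox
    H, W = frame_shape
    x0, y0 = max(0, x - margin), max(0, y - margin)
    x1, y1 = min(W, x + w + margin), min(H, y + h + margin)
    return x0, y0, x1, y1

def adaptive_subsample(frames, init_bbox, margin=8):
    """Read only ROI pixels per frame; return the fraction of pixels read."""
    bbox = init_bbox
    pixels_read = pixels_total = 0
    for frame in frames:
        H, W = frame.shape
        x0, y0, x1, y1 = roi_from_track(bbox, margin, (H, W))
        roi = frame[y0:y1, x0:x1]          # simulated ROI readout
        pixels_read += roi.size
        pixels_total += frame.size
        # Placeholder "tracker": centroid of above-average pixels in the ROI.
        ys, xs = np.nonzero(roi > roi.mean())
        if len(xs):
            cx, cy = int(xs.mean()) + x0, int(ys.mean()) + y0
            w, h = bbox[2], bbox[3]
            bbox = (cx - w // 2, cy - h // 2, w, h)
    return pixels_read / pixels_total

rng = np.random.default_rng(0)
frames = [rng.random((240, 320)) for _ in range(10)]
frac = adaptive_subsample(frames, init_bbox=(100, 100, 40, 40))
print(f"fraction of pixels read: {frac:.3f}")
```

With a 40x40 box and an 8-pixel margin on QVGA frames, only a few percent of the pixels are read each frame, which is the energy lever the talk exploits in hardware.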

Suren Jayasuriya is an assistant professor at Arizona State University, jointly in the School of Arts, Media and Engineering and the School of Electrical, Computer and Energy Engineering. Before this, he was a postdoctoral fellow at the Robotics Institute at Carnegie Mellon University, and he received his Ph.D. in electrical and computer engineering from Cornell University in 2017. His research interests are in computational imaging and photography, computer vision/graphics, machine learning, and CMOS image sensors.

The Akida event-based neural processor is a high-performance, low-power SoC targeting edge applications. In this session, we discuss the key distinguishing factors of Akida’s computing architecture, which include aggressive 1- to 4-bit weight and activation quantization, event-based implementation of machine-learning operations, and the distribution of computation across many small neural processing units (NPUs). We show how these architectural changes result in a 50% reduction in MACs, parameter memory usage, and peak bandwidth requirements when compared with non-event-based 8-bit machine learning accelerators. Finally, we describe how Akida performs on-chip learning with a proprietary bio-inspired learning algorithm. We present state-of-the-art few-shot learning in both visual (MobileNet on mini-ImageNet) and auditory (6-layer CNN on Google Speech Commands) domains.
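The two architectural ideas above can be sketched in a few lines: uniform symmetric quantization compresses weights to a handful of levels (e.g. 4-bit), and an event-based matrix-vector product touches weight columns only where the input activation is nonzero, so sparse activations directly cut the MAC count. This is a generic illustration under those stated assumptions, not Akida's actual implementation.

```python
import numpy as np

def quantize_symmetric(w, bits):
    """Uniform symmetric quantization of a weight tensor to `bits` signed bits."""
    qmax = 2 ** (bits - 1) - 1              # e.g. 7 levels each side for 4-bit
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int32)
    return q, scale

def event_based_matvec(wq, a):
    """Accumulate weight columns only for nonzero (event) activations."""
    out = np.zeros(wq.shape[0], dtype=np.int64)
    events = np.nonzero(a)[0]
    for j in events:
        out += wq[:, j] * a[j]
    return out, len(events)

rng = np.random.default_rng(0)
w = rng.normal(size=(32, 128))
a = (rng.random(128) < 0.25).astype(np.int32)   # ~25% sparse binary activations

wq, scale = quantize_symmetric(w, bits=4)
out, active_cols = event_based_matvec(wq, a)

print(f"weight columns processed: {active_cols}/{a.size}")
```

Skipping zero-activation columns is what makes the event-based formulation cheap: the number of accumulations scales with activation sparsity rather than layer width, and the 4-bit integer weights shrink parameter memory accordingly.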

Kristofor Carlson is a senior research scientist at BrainChip Inc. Previously, he worked as a postdoctoral scholar in Jeff Krichmar’s cognitive robotics laboratory at UC Irvine, where he studied unsupervised learning rules in spiking neural networks (SNNs), the application of evolutionary algorithms to SNNs, and neuromorphic computing. Afterwards, he worked as a postdoctoral appointee at Sandia National Laboratories, where he applied uncertainty quantification to computational neural models and helped develop neuromorphic systems. In his current role, he is involved in the design and optimization of both the machine learning algorithms and the hardware architecture of BrainChip’s latest system-on-chip, Akida.

==========================

IMPORTANT: Please register here

Once registered, you will receive a link and dial-in information for the Zoom teleconference by email, which you can also add to your calendar.

We encourage you to register early, since online broadcast capacity may be limited.

Note: tinyML Talks slides and videos will be available here on the tinyML Forum and on the tinyML YouTube Channel afterwards for those who missed the live session. Please take a moment to subscribe to the YouTube channel today.

Feel free to ask your questions on this thread and keep the conversation going!