We held our next tinyML Talks webcast: Robert LiKamWa from Arizona State University presented "System support for efficient multi-resolution visual computing on low power embedded systems" on September 10, 2020 at 8:00 AM Pacific Time.
The physical world, in all its complexity, contains visual information with varying spatial and temporal quality needs.
Low power systems could benefit from the ability to situationally sacrifice image resolution, saving system energy whenever imaging detail is unnecessary for the computer vision task at hand. This talk discusses the challenges and opportunities of embedded operating systems and vision sensing pipeline architectures in flexibly supporting dynamic multi-resolution workloads, with rapid reconfigurability at low latencies and the expressiveness to meet computational needs with minimal developer burden.
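The idea of situationally sacrificing resolution can be illustrated with a minimal sketch. This is not code from the talk; the confidence-threshold policy, the resolution choices, and both helper functions are hypothetical assumptions for illustration only:

```python
import numpy as np

def select_resolution(confidence, full_shape=(1080, 1920), low_shape=(270, 480)):
    """Hypothetical policy: when the vision task is already confident,
    imaging detail is unnecessary, so capture at a lower resolution
    to save sensing and compute energy."""
    return low_shape if confidence >= 0.9 else full_shape

def downscale(frame, target_shape):
    """Nearest-neighbour downscale by integer strides (numpy only)."""
    stride_y = frame.shape[0] // target_shape[0]
    stride_x = frame.shape[1] // target_shape[1]
    return frame[::stride_y, ::stride_x]

# Example: a confident detector triggers low-resolution capture.
frame = np.zeros((1080, 1920), dtype=np.uint8)  # stand-in for a sensor frame
shape = select_resolution(confidence=0.95)
small = downscale(frame, shape)  # (270, 480) frame, 16x fewer pixels
```

In a real pipeline, the savings come from reconfiguring the sensor itself rather than downscaling in software; the sketch only captures the control logic of when to switch.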
Robert LiKamWa is an assistant professor at Arizona State University, appointed in the School of Arts, Media and Engineering (AME) and the School of Electrical, Computer and Energy Engineering (ECEE). At ASU, LiKamWa directs Meteor Studio (http://meteor.ame.asu.edu), which explores the research and design of software and hardware for mobile Augmented Reality, Virtual Reality, Mixed Reality, and visual computing systems, and their ability to help people tell their stories. To this end, Meteor Studio's research and design projects span three arcs: (i) advanced visual capture and processing systems, (ii) systems for hybrid virtual-physical immersion through augmentation of senses, and (iii) design frameworks for data-driven augmented reality and virtual reality storytelling and sensemaking.
Prior to coming to ASU, LiKamWa completed his bachelor’s, master’s and doctoral degrees at Rice University in the Department of Electrical and Computer Engineering.
LiKamWa has received an NSF CAREER Award, a Google Faculty Research Award, and a Best Paper Award at MobiSys 2013.
Watch on YouTube:
Download presentation slides:
Feel free to ask your questions on this thread and keep the conversation going!