We held our next tinyML Talks webcast on February 22, 2022: Yiran Chen from Duke University presented "Software/Hardware Co-design for Tiny AI Systems."
The advancement of Artificial Intelligence (AI) and its swift deployment on resource-constrained tiny systems rely on both the design quality and the design efficiency of models. In this talk, we first introduce efficient AI models built via hardware-friendly model compression and topology-aware Neural Architecture Search, optimizing the quality-efficiency trade-off. We then apply cross-layer optimization and efficient distributed learning to build fast, scalable AI systems on specialized hardware. Finally, we demonstrate improvements in the quality-efficiency trade-off in other applications and scenarios, such as Electronic Design Automation (EDA) and Adversarial Machine Learning. Through these explorations, we present our vision for the future of the full stack of tiny AI solutions.
Yiran Chen is a Professor in the Department of Electrical and Computer Engineering at Duke University and serves as the Director of the NSF AI Institute for Edge Computing Leveraging the Next-generation Networks (Athena). His group's research focuses on new memory and storage systems, machine learning and neuromorphic computing, and mobile computing systems. Dr. Chen has published one book and more than 500 papers and has been granted 96 US patents. He has received numerous awards, including the recent IEEE Computer Society Edward J. McCluskey Technical Achievement Award. He currently serves as the Editor-in-Chief of the IEEE Circuits and Systems Magazine and as the chair of ACM SIGDA, and he is a Fellow of both the ACM and the IEEE.
Watch on YouTube:
Download presentation slides:
Feel free to ask your questions on this thread and keep the conversation going!