We held our next tinyML Talks webcast on August 31, 2021: Dmitry Maslov from Seeed Studio presented "Speech-to-intent model deployment to low-power low-footprint devices."
A traditional approach to using speech for device control or user-request fulfillment is to first transcribe the speech to text and then parse the text into commands/queries in a suitable format. While this approach offers a lot of flexibility in terms of vocabulary and application scenarios, the combination of a speech recognition model and a dedicated parser is not suited to the constrained resources of microcontrollers.
A more efficient way is to parse user utterances directly into actionable output in the form of intents and slots. In this presentation I will share techniques to train a domain-specific speech-to-intent model and deploy it to a Cortex-M4F-based development board with a built-in microphone, the Wio Terminal from Seeed Studio.
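To make the intent/slots output format concrete, here is a minimal sketch of the data structure such a model produces. The intent and slot names below are hypothetical illustrations, not taken from the talk:

```python
from dataclasses import dataclass, field

@dataclass
class IntentResult:
    """Structured output of a speech-to-intent model: a single intent
    label plus slot values, instead of a free-text transcription."""
    intent: str
    slots: dict = field(default_factory=dict)

# Hypothetical example: an utterance like "turn on the kitchen light"
# maps directly to an actionable intent/slots structure, skipping the
# transcribe-then-parse pipeline entirely.
result = IntentResult(
    intent="activate_device",
    slots={"device": "light", "location": "kitchen"},
)
print(result.intent)             # the action to perform
print(result.slots["location"])  # where to perform it
```

The firmware only needs to dispatch on a small, fixed set of intent labels, which is what makes this approach tractable on a microcontroller.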
Dmitry Maslov is a machine learning engineer working at Seeed Studio on machine learning applications for embedded devices, both MCUs and SBCs. He recently published a series of TinyML projects combined into a course, in which he uses Edge Impulse and TensorFlow Lite for Microcontrollers to tackle challenging sensor-data analysis tasks, with Seeed Studio’s Wio Terminal as the reference hardware. He also runs the Hardware.ai YouTube channel, which focuses on embedded ML and robotics.
Watch on YouTube:
Download presentation slides:
Feel free to ask your questions on this thread and keep the conversation going!