I’ve been looking for dev board/kit options that are compatible with low-power/cost cameras. Does anyone have a setup that works already, and if so, what did you choose and what were your criteria?
I’ve seen the ESP-EYE and the OpenMV camera, but I am curious whether there’s a camera sensor that I could hook up to most microcontrollers (to be honest, I’d like to use the Teensy 4.0 for its very fast microcontroller chip while I wait for the Cortex-M55/Ethos-U55 combo to be released as a dev kit). I just have no idea how one goes about interfacing a camera sensor to a microcontroller (and checking compatibility).
For an example of a complete computer vision application based on a Convolutional Neural Network (CNN) running on a Cortex-M7 microcontroller, you may instead consider the FP-AI-VISION1 reference design (available here: https://www.st.com/en/embedded-software/fp-ai-vision1.html ), built on the STM32H747I-DISCO kit.
The examples provided are food-recognition applications based on a CNN derived from MobileNet.
They distinguish 18 classes of common food, such as pizza, hamburgers, and Caesar salad.
The STM FP-AI-VISION1 example lists the STM32F4DIS-CAM camera daughterboard for use with the STM32H747I-DISCO; however, the STM32F4DIS-CAM seems to be out of stock everywhere I checked. Farnell shows it back-ordered with expected stock in July 2020. The STM32F4DIS-CAM appears to connect over a parallel camera (DCMI) interface rather than MIPI® DSI, so perhaps another camera module would work with the STM32H747I-DISCO. The STM32H747I-DISCO does look like a nice dev kit, though.
It does not seem consistent at telling the difference between a person and not a person. Sometimes the identification is pretty good, but at other times it reports a person where there is none, or misses one that is there.
I read the person_detection article/tutorial.
A 250-kilobyte model with a 19-second inference delay does not look very promising for TinyML platforms…
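For a rough sense of scale, the quoted figures work out to only a few frames per minute of effective throughput. A back-of-the-envelope sketch (the 250 KB and 19 s numbers are taken from the post above; everything else is simple arithmetic):

```python
# Back-of-the-envelope throughput from the figures quoted above.
model_size_kb = 250    # reported model footprint
inference_s = 19.0     # reported latency per inference

fps = 1.0 / inference_s
frames_per_minute = 60.0 / inference_s

print(f"{fps:.3f} frames/s, {frames_per_minute:.1f} frames/min")
# roughly 0.05 frames/s, i.e. about 3 frames per minute
```

At that rate the sensor spends almost all of its time waiting on the model, which is why the latency matters more than the model size for this kind of always-on use case.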
On the other hand, is using an M7-based microcontroller still considered TinyML from a current-consumption standpoint?
Hello… I have used an adjustable lens holder with a Raspberry Pi. One of those holders could be combined with a 3D-printed part. If I were to go that route, I would need to order the holder first to get exact dimensions. However, I’m guessing a simple rectangular adapter plate might work: two holes to match the camera PCB and two holes to match the purchased lens holder.
Connecting an image sensor to a micro is not trivial. The OpenMV Cam is a great place to start to get your feet wet; then try something more custom…
Things to watch out for in my experience:
Voltage levels/protocol: low-power image sensors (Himax/Pixart) tend to use a parallel port with 1.8 V signal levels.
Does the micro have a parallel capture port? E.g., the ESP32 uses its I2S peripheral in a special mode to do this, as implemented in the ESP-EYE.
Sensor tuning: once you’re able to communicate with the sensor, you will need to tune it for your application. Most people stay with the defaults, but those won’t be optimal for low power or image quality.
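On the parallel-capture point, one practical sanity check before picking a micro is whether a raw frame even fits in on-chip RAM. A quick sketch (the RAM figure and resolution list are illustrative assumptions, not specs for any particular chip):

```python
# Does a raw frame fit in on-chip SRAM? (figures are illustrative)
def frame_bytes(width, height, bits_per_pixel):
    """Size in bytes of one uncompressed frame."""
    return width * height * bits_per_pixel // 8

ram_bytes = 192 * 1024  # hypothetical micro with 192 KB of SRAM

for name, (w, h, bpp) in {
    "QQVGA gray":  (160, 120, 8),
    "QVGA gray":   (320, 240, 8),
    "QVGA RGB565": (320, 240, 16),
    "VGA gray":    (640, 480, 8),
}.items():
    size = frame_bytes(w, h, bpp)
    print(f"{name}: {size} bytes, fits: {size < ram_bytes}")
```

Even grayscale VGA already needs 300 KB for a single buffer, before you account for double buffering or the model’s own working memory, which is why low-power sensors tend to output small frames.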
Any idea if they will be fine for doing TinyML? I use TensorFlow Lite. I am also interested in people’s experience with ESP32 boards that include both the LED and the camera. Any opinions?