tinyML Talks on January 25, 2022 “Oculi is putting the human eye in A.I.” by Charbel Rizk

We held our next tinyML Talks webcast on January 25, 2022: Charbel Rizk from Oculi Inc. presented “Oculi is putting the human eye in A.I.”

January 25 forum

Oculi is putting the “Human Eye” in AI: machines outperform humans in most tasks, but human vision remains far superior. Human vision delivers the actionable signal in real time while consuming only milliwatts. Since biology and nature have inspired much of our technology, developing imaging technology that mimics the human eye is the logical path. Unlike the photos and videos we collect for personal consumption, machine vision is not about pretty images or the highest pixel count: it should extract the best actionable information as efficiently as possible, in both time and energy, from the available signal (photons). At Oculi, we have developed a new architecture for computer and machine vision that promises efficiency on par with human vision while outperforming it in speed. Oculi provides a single-chip vision solution that combines sensing and pre-processing at the pixel, delivering up to 30x improvement in energy-delay product. This enables highly efficient vision solutions and uniquely positions Oculi to make imaging technology practical for the tinyML community.
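Energy-delay product (EDP), cited above, is simply the energy an operation consumes multiplied by the time it takes, so a 30x improvement can come from any mix of energy and latency gains. A minimal sketch with assumed, illustrative numbers (not Oculi measurements):

```python
# Energy-delay product (EDP) is a standard efficiency metric: energy per
# operation multiplied by the time that operation takes. A lower EDP means a
# solution is jointly more energy-efficient and faster. The figures below are
# illustrative assumptions, not measured Oculi data.

def energy_delay_product(energy_mj: float, delay_ms: float) -> float:
    """Return EDP in millijoule-milliseconds."""
    return energy_mj * delay_ms

# Hypothetical conventional pipeline: 30 mJ per frame, 33 ms latency.
conventional = energy_delay_product(30.0, 33.0)
# Hypothetical in-pixel pipeline: 10 mJ per frame, 3.3 ms latency.
in_pixel = energy_delay_product(10.0, 3.3)

print(conventional / in_pixel)  # ~30x improvement with these assumed numbers
```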

Dr. Rizk is the Founder, CEO, and CTO of Oculi and an Associate Research Professor in ECE at Johns Hopkins. He has been recognized as a top innovator, thought leader, and successful Principal Investigator / S&T manager. A pioneer in autonomous systems, he led a small team that developed and demonstrated the first four-rotor (quadrotor) UAV system in the early ’90s. Dr. Rizk has collaborated successfully with various FFRDCs, government labs, academia, and companies of various sizes. He is a senior member of IEEE and AIAA, and a member of AUVSI, EAA, and OSA.


Watch on YouTube:
Charbel Rizk

Download presentation slides:
Charbel Rizk

Feel free to ask your questions on this thread and keep the conversation going!

  1. Why is the transfer between perceiving and sensing bi-directional?

The Oculi technology promises efficiency on par with human vision, which is based on a combination of the eye and the brain. The efficiency of human vision starts with the human eye, a sensor with far more resolution than cameras that nevertheless sends far less data to the brain. This is only possible because of the two-way communication of signals and controls between the eye and the brain: the eye sends information depending on what the brain is looking for, and this changes dynamically and in real time. This doesn’t exist in current technology. The architecture of the OCULI SPU™ has similar features to the human eye and brain combination, delivering this selectivity, efficiency (dynamic optimization), and speed.
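The two-way eye/brain loop described above can be sketched as a closed sensing loop in which the processor's request shapes what the sensor sends back. All class and method names below are purely illustrative, not the Oculi API:

```python
# Minimal sketch of two-way sensor<->processor communication, analogous to the
# eye/brain loop: the processor ("brain") tells the sensor ("eye") what it is
# looking for, and the sensor returns only the matching signal. Illustrative
# names only; this is not the Oculi API.

class Sensor:
    def __init__(self, frame):
        self.frame = frame  # full-resolution pixel array (list of rows)

    def read(self, roi):
        """Return only the pixels inside the requested region of interest."""
        (r0, r1), (c0, c1) = roi
        return [row[c0:c1] for row in self.frame[r0:r1]]

class Processor:
    def __init__(self):
        self.roi = ((0, 2), (0, 2))  # start by attending to a small window

    def update(self, pixels):
        """Adapt the request to activity; here, simply grow the window."""
        if any(p > 0 for row in pixels for p in row):
            (r0, r1), (c0, c1) = self.roi
            self.roi = ((r0, r1 + 1), (c0, c1 + 1))

frame = [[0, 1, 0], [0, 0, 2], [3, 0, 0]]
sensor, brain = Sensor(frame), Processor()
for _ in range(2):                   # closed loop: request -> read -> adapt
    pixels = sensor.read(brain.roi)  # sensor sends only what was asked for
    brain.update(pixels)             # processor changes the request in turn
```

The point of the sketch is that the data volume is set by the processor's current request, not by the sensor's full resolution.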

  2. Is the overview of the IntelliPixel on slide 2 a top view or a side view? It seems a lot of the pixel surface is taken up by non-photosensitive area.

The Oculi technology is modular and scalable. We have done monolithic designs where the fill factor is small, but the S12 product line is a two-tier design where the fill factor is over 90%.

  3. What is the accuracy level for Oculi technology?

We’re not clear on the question, but if it is in reference to AI/perception, we simply guarantee a reduction in bandwidth/latency/power while preserving relevant information. The SPU can dynamically, and in real time, change the amount of data it sends out, from the smallest (the actionable signal) up to a full frame. If the question relates to the accuracy of the actionable signal (without any additional processing), we will meet the customer requirements.
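The dynamic scaling described here, from actionable signal up to full frame, amounts to picking the richest output that fits the available budget. A sketch under assumed, illustrative mode names and sizes (not actual OCULI SPU modes):

```python
# Sketch of dynamic output scaling: choose the richest output mode whose data
# rate still fits the available bandwidth budget. Mode names and sizes are
# illustrative assumptions, not actual OCULI SPU modes.

# (mode, approximate kilobits per readout), ordered from leanest to richest
MODES = [
    ("actionable_signal", 8),
    ("events_only", 64),
    ("region_of_interest", 512),
    ("full_frame", 8_192),
]

def select_mode(budget_kbits: int) -> str:
    """Return the richest mode that fits the bandwidth budget."""
    best = MODES[0][0]  # always fall back to the leanest output
    for name, cost in MODES:
        if cost <= budget_kbits:
            best = name
    return best

print(select_mode(600))     # region_of_interest
print(select_mode(10_000))  # full_frame
```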

  4. Is there an ML backend in eye tracking and face detection, or only SPU processing?

The demo we showed using the S11 prototype involved off-SPU processing and included some ML. However, we are focused on these types of use cases because they can be implemented effectively in a single-chip solution (i.e., the SPU).

  5. For this model, how is the data stored or transformed?

We are not sure about this question but suspect it is not applicable.

  6. Does processing at the pixel level create an overhead in terms of size?

The Oculi technology was optimized for machine vision, where what matters is delivering the required information efficiently (in time, energy, form factor, and cost); it is no longer about the megapixel race. The pixel size can be larger than in a conventional design, but the size ultimately depends on the requirements, in particular latency and/or power. As noted previously, the technology was developed to be flexible and scales with the CMOS process and 3D technology.

  7. What is the latency of a smart event?

Because of the parallel nature of the SPU architecture, smart events are generated directly at the pixel, and latency can be reduced to roughly nanoseconds. Depending on the number of smart events and the technology node, the main source of latency is the I/O. We have fabricated in various nodes; the S11 was in 90-nm CMOS, and with that chip we were able to run the pixel at 112 MHz, track a laser modulated at 50 MHz, and estimate range/depth.
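Why I/O dominates once events are generated at the pixel can be seen with back-of-envelope arithmetic: a sparse event stream takes orders of magnitude less link time than a full frame. The link rate and event size below are illustrative assumptions:

```python
# Back-of-envelope sketch of I/O-bound readout latency: on-pixel event
# generation is ~ns, but events still have to be serialized over a link.
# All figures (event size, link rate) are illustrative assumptions.

def readout_latency_us(n_events: int, bits_per_event: int, link_mbps: float) -> float:
    """Time to transmit n_events over a serial link, in microseconds.

    1 Mbit/s is exactly 1 bit per microsecond, so bits / Mbit/s gives us.
    """
    total_bits = n_events * bits_per_event
    return total_bits / link_mbps

# 1,000 sparse smart events of 32 bits over a 1 Gbit/s link:
sparse = readout_latency_us(1_000, 32, 1_000.0)         # 32 us
# A full 1-megapixel, 8-bit frame over the same link:
full_frame = readout_latency_us(1_000_000, 8, 1_000.0)  # 8,000 us
print(sparse, full_frame)
```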

  8. Does the technology require initial treatment with more conventional machine-learning convolution-type processing?

No, the SPU performs the initial pre-processing required for the majority of image processing algorithms.

  9. How does performing the analog-to-digital conversion at the pixel level impact the area efficiency compared to standard column-based techniques?

Please see the answer to question 6.

  10. What is your main use case? What applications buy your sensor in volume?

Oculi is focused on imaging applications requiring low latency, power, bandwidth, size, and/or cost. Our initial focus is XR and smart/interactive displays, and our roadmap includes automotive, industrial robotics, smart cities, and consumer electronics. In addition to SPU product lines to be sold in high volumes to various channel partners, we are also pursuing strategic partnerships, including IP licensing in certain


  11. Can you show some example code on how to program it?

We have developed comprehensive, out-of-the-box demo applications to visualize and experiment with the OCULI SPU. We also provide libraries (C++), code samples (C++, Python), and documentation to enable our customers and partners to evaluate and design solutions around the OCULI SPU. Please contact Joe, our Lead Technical Pre-sales, at joe.maljian@oculi.ai if you are interested in knowing more.

  12. Can the SPU work stand-alone, or does it require a supporting processor chip?

It depends on the use case; the architecture and technology were developed to support both options. The OCULI SPU fitted with the IntelliPixel™ technology supports multiple output modes, including actionable signals that enable stand-alone solutions. Our Vision Intelligence (VI) Platforms provide additional processing capabilities (off-SPU) to enable the development of solutions and products that may benefit from this type of architecture.

  13. Does Oculi collaborate with research institutions, and who would be the contact for inquiries in that direction?

Yes! Please contact Joe, our Lead Technical Pre-sales at joe.maljian@oculi.ai for inquiries.