tinyML Talks: Constrained Object Detection on Microcontrollers with FOMO


Constrained Object Detection on Microcontrollers with FOMO

Shawn Hymel
Embedded machine learning developer relations engineer
Edge Impulse

Image classification has been a core focus of deep learning for many years. However, many computer vision applications require knowing where objects are in an image and the ability to count the number of objects, which goes far beyond simple image classification. This is where object detection comes in.

Object detection models are capable of finding objects of interest in an image and providing details about those objects, such as their classification, location, size, and relative distance from the camera. A handful of object detection models, such as MobileNet V2 SSD and YOLOv5, are optimized for low-power systems, including smartphones and single-board computers. However, most microcontrollers are still incapable of running such models due to their processing and memory limitations.

Edge Impulse has developed a new technique named “Faster Objects, More Objects” (FOMO) that performs constrained object detection on low-power devices, such as microcontrollers. FOMO provides the location of target objects in an image, but it does not produce arbitrary bounding boxes, so it cannot report object size or distance. As a result, it requires up to 30x less processing power and memory than MobileNet V2 SSD or YOLOv5.
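As a rough illustration of this output style, a FOMO-like model emits a low-resolution grid of per-cell class probabilities rather than bounding boxes, and object locations can be recovered from the active cells. The sketch below is a hypothetical post-processing step, not Edge Impulse's implementation; the `cell_size`, `threshold`, and the naive one-cell-per-object assumption are all illustrative.

```python
def fomo_centroids(heatmap, threshold=0.5, cell_size=8):
    """Convert a FOMO-style per-cell probability grid (2D list for
    one object class) into approximate object centroids in pixel
    coordinates. Hypothetical sketch: a real pipeline would merge
    adjacent active cells into a single object before reporting."""
    centroids = []
    for row, cells in enumerate(heatmap):
        for col, p in enumerate(cells):
            if p >= threshold:
                # Map the cell index back to the center of the
                # corresponding patch in the input image.
                centroids.append(((col + 0.5) * cell_size,
                                  (row + 0.5) * cell_size))
    return centroids
```

Note that the result is only a point per detection, which is what makes the approach so much cheaper than bounding-box detectors on a microcontroller.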

In this talk, we will describe object detection, how FOMO works, and provide a live demonstration of constrained object detection on a microcontroller.
