MSA: Visual SLAM (no lidar) on a farm tractor


This robot is a hybrid (autonomous/human piloted) farm tractor with four cameras and an IMU. The robot has no lidar and no GPS/GNSS.

The video shows how we use image features for visual odometry, to build a map of the environment, and to localize (generate 6DOF pose estimates) within the map. You can see the blue crosses in the video identifying various features; cross size scales with image patch size.
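The core of visual odometry is chaining frame-to-frame relative motion estimates into a global 6DOF pose. Below is a minimal sketch of that chaining step, not MSA's implementation: poses are plain 4x4 homogeneous matrices, and for brevity the per-frame relative transforms are given directly rather than estimated from image features.

```python
# Minimal sketch (not MSA's code): visual odometry composes per-frame
# relative transforms into a global 6DOF pose. In a real pipeline each
# relative transform would come from matched image features; here they
# are supplied directly for illustration.

import math

def mat_mul(a, b):
    """Multiply two 4x4 matrices stored as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def se3(yaw, tx, ty, tz):
    """Build a 4x4 homogeneous transform from a yaw angle and a translation."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c,  -s,  0.0, tx],
            [s,   c,  0.0, ty],
            [0.0, 0.0, 1.0, tz],
            [0.0, 0.0, 0.0, 1.0]]

def run_odometry(relative_motions):
    """Chain body-frame relative transforms into a trajectory of global poses."""
    pose = se3(0.0, 0.0, 0.0, 0.0)  # start at the origin (identity)
    trajectory = [pose]
    for rel in relative_motions:
        pose = mat_mul(pose, rel)   # right-multiply: motion is in the body frame
        trajectory.append(pose)
    return trajectory

# Drive forward 1 m twice, then turn 90 degrees while advancing 1 m.
traj = run_odometry([se3(0.0, 1.0, 0.0, 0.0),
                     se3(0.0, 1.0, 0.0, 0.0),
                     se3(math.pi / 2, 1.0, 0.0, 0.0)])
x, y = traj[-1][0][3], traj[-1][1][3]
print(round(x, 3), round(y, 3))  # final position: 3.0 0.0
```

Because each step multiplies on the right, every relative motion is interpreted in the tractor's current body frame, which is why drift in any one estimate propagates into all later poses; this is exactly what the map and loop closures correct.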

Note how the four camera feeds have independently controlled white balance and plenty of image artifacts: dirt on the lenses, glare and reflections on all four cameras, and a bouncing seat in the FOV. Our software handles these artifacts without increasing pose-estimate latency or requiring massive compute.

The map view on the bottom shows keyframes being created and connected in a covisibility graph, along with the point cloud of the local sparse map currently being tracked in the images. The keyframes in the local map have little camera icons indicating the pose of the sensors at that moment.
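A covisibility graph of the kind shown in the map view can be sketched in a few lines: keyframes are nodes, and an edge links two keyframes that observe enough of the same map points. This is a generic illustration of the data structure, not MSA's code; the keyframe IDs, point IDs, and the `min_shared` threshold are made up for the example.

```python
# Minimal sketch (not MSA's code) of a covisibility graph: nodes are
# keyframes, edges connect keyframes that share observed map points,
# and edge weights count the shared points.

from itertools import combinations

def build_covisibility_graph(observations, min_shared=2):
    """observations: {keyframe_id: set of map-point ids it observes}.
    Returns {(kf_a, kf_b): shared_count} for pairs at or above the threshold."""
    edges = {}
    for a, b in combinations(sorted(observations), 2):
        shared = len(observations[a] & observations[b])
        if shared >= min_shared:
            edges[(a, b)] = shared
    return edges

def local_map(observations, edges, kf):
    """Union of map points seen by kf and its covisible neighbors,
    i.e. the sparse local map tracked against the live images."""
    points = set(observations[kf])
    for (a, b) in edges:
        if kf == a:
            points |= observations[b]
        elif kf == b:
            points |= observations[a]
    return points

# Three keyframes; 0 and 1 overlap strongly, 2 barely overlaps with 1.
obs = {0: {1, 2, 3, 4}, 1: {3, 4, 5, 6}, 2: {6, 7, 8}}
g = build_covisibility_graph(obs)
print(g)                             # {(0, 1): 2}; keyframes 0 and 1 share points 3 and 4
print(sorted(local_map(obs, g, 0)))  # [1, 2, 3, 4, 5, 6]
```

Tracking against only the covisible neighborhood, rather than the whole map, is what keeps the per-frame work bounded as the map grows.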

Main Street Autonomy is a Pittsburgh-based software company providing calibration and localization/mapping software for on-road and off-road autonomy, both indoors and outdoors, on vehicles ranging from farm tractors and floor-cleaning robots to tractor trailers and drones. See our website for more information.

https://mainstreetautonomy.com/
