Kimera: from SLAM to Spatial Perception with 3D Dynamic Scene Graphs


In this talk, I present several extensions to our previous work on 3D Dynamic Scene Graphs (DSGs):
Kimera-PGMO: a novel Simultaneous Pose-Graph and Mesh Optimization framework (see the toy sketch after this list).
Hierarchical Semantic Path-planning on 3D Scene Graphs.
New open-source datasets: uHumans2 with 12 new scenes.
Results from running Kimera on real-life datasets to build 3D DSGs.
Talk given at Prof. Scaramuzza's Robotics and Perception Group @ UZH.
Let me know what you think in the comments!

Paper: https://arxiv.org/abs/2101.06894
Relevant Code: https://github.com/MIT-SPARK/Kimera
Datasets:
-- uHumans: http://web.mit.edu/sparklab/datasets/...
-- uHumans2: http://web.mit.edu/sparklab/datasets/...

3D Dynamic Scene Graphs were presented at RSS 2020:
Paper: https://arxiv.org/abs/2002.06289
Video: • 3D Dynamic Scene Graphs: Actionable S...
Talk: • 3D Dynamic Scene Graphs: a new mappin...

Our first paper on Kimera was presented at ICRA 2020:
Paper: https://arxiv.org/abs/1910.02490
Video: • Kimera: an Open-Source Library for Re...
Talk: https://studio.youtube.com/video/kF1k...
Tutorial: • Metric-Semantic SLAM with Kimera: A H...

Contact info:
Twitter: @rosinoltoni
LinkedIn: /rosinol
Google Scholar: https://scholar.google.ch/citations?u...

Abstract: Humans are able to form a complex mental model of the environment they move in. This mental model captures geometric and semantic aspects of the scene, describes the environment at multiple levels of abstraction (e.g., objects, rooms, buildings), and includes static and dynamic entities and their relations (e.g., a person is in a room at a given time). In contrast, current robots' internal representations still provide a partial and fragmented understanding of the environment, either in the form of a sparse or dense set of geometric primitives (e.g., points, lines, planes, voxels) or as a collection of objects. This paper attempts to reduce the gap between robot and human perception by introducing a novel representation, a 3D Dynamic Scene Graph (DSG), that seamlessly captures metric and semantic aspects of a dynamic environment. A DSG is a layered graph where nodes represent spatial concepts at different levels of abstraction, and edges represent spatio-temporal relations among nodes. Our second contribution is Kimera, the first fully automatic method to build a DSG from visual-inertial data. Kimera includes state-of-the-art techniques for visual-inertial SLAM, metric-semantic 3D reconstruction, object localization, human pose and shape estimation, and scene parsing. Our third contribution is a comprehensive evaluation of Kimera on real-life datasets and photo-realistic simulations, including a newly released dataset, uHumans2, which simulates a collection of crowded indoor and outdoor scenes. Our evaluation shows that Kimera achieves state-of-the-art performance in visual-inertial SLAM, estimates an accurate 3D metric-semantic mesh model in real time, and builds a DSG of a complex indoor environment with tens of objects and humans in minutes. Our final contribution shows how to use a DSG for real-time hierarchical semantic path-planning. The core modules in Kimera are open-source.
