Watch Your STEPP: Semantic Traversability Estimation Using Pose Projected Features

In this demo video, we introduce STEPP: Semantic Traversability Estimation using Pose Projected Features, a novel path-planning approach for legged robots. STEPP leverages RGB images to identify traversable regions in unstructured terrain using a positive-unlabeled training methodology. By aligning egocentric images with odometry and projecting future poses onto segmented regions, we extract meaningful features from the DINOv2 model and process them through an encoder-decoder MLP for feature reconstruction.
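To make the pipeline concrete, here is a minimal sketch of the feature-reconstruction idea. It is an illustration under our own assumptions, not the released implementation: the FeatureReconstructor class, its layer sizes, and the 224x224 input are hypothetical, while the DINOv2 backbone and its forward_features call are the public torch.hub API.

```python
# Minimal sketch (hypothetical names; the released STEPP code may differ).
import torch
import torch.nn as nn

class FeatureReconstructor(nn.Module):
    """Encoder-decoder MLP trained to reconstruct DINOv2 patch features
    from regions the robot has traversed (the positive samples)."""
    def __init__(self, feat_dim=384, hidden_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim // 2), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(hidden_dim // 2, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, feat_dim),
        )

    def forward(self, feats):
        return self.decoder(self.encoder(feats))

# Frozen DINOv2 ViT-S/14 backbone from the official torch hub entry.
backbone = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14').eval()

@torch.no_grad()
def patch_features(image):
    # image: (1, 3, H, W), H and W multiples of 14, ImageNet-normalized.
    out = backbone.forward_features(image)
    return out['x_norm_patchtokens'].squeeze(0)  # (num_patches, 384)

model = FeatureReconstructor()
image = torch.randn(1, 3, 224, 224)  # stand-in for a normalized RGB frame
feats = patch_features(image)

# Per-patch reconstruction error serves as the traversability cost:
# patches unlike the traversed training data reconstruct poorly.
cost = ((model(feats) - feats) ** 2).mean(dim=-1)  # (num_patches,)
```

Because the MLP only ever sees features from traversed regions during training, terrain that looks unlike anything the robot has walked on reconstructs poorly and is assigned a high cost.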
This demo showcases real-world experiments in indoor maze-like environments and on outdoor forest trails, as well as in simulated environments built in Unreal Engine. The video highlights how STEPP integrates with the CMU navigation stack, assigning reconstruction costs to navigate challenging terrain.
Watch how STEPP generalizes across diverse environments, handling complex navigation tasks ranging from maze structures to tall grass and forest obstacles.
The code is open-sourced and available on our project website: https://rpl-cs-ucl.github.io/STEPP/
