MVGrasp: Real-Time Multi-View 3D Object Grasping in Highly Cluttered Environments

Nowadays, robots play an increasingly important role in our daily lives. In human-centered environments, robots often encounter piles of objects, packed items, or isolated objects. A robot must therefore be able to grasp and manipulate different objects in various situations to help humans with daily tasks. In this paper, we propose a multi-view deep learning approach for robust object grasping in human-centric domains. In particular, our approach takes the point cloud of an arbitrary object as input and generates orthographic views of that object. The obtained views are then used to estimate a pixel-wise grasp synthesis for each object. We train the model end-to-end on a synthetic object grasp dataset and test it on both simulated and real-world data without any further fine-tuning. To evaluate the performance of the proposed approach, we performed extensive experiments in four everyday scenarios: isolated objects, packed items, piles of objects, and highly cluttered scenes.
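The core preprocessing step described above — turning an object's point cloud into orthographic depth views — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, view axes, and image resolution are all assumptions made for the example.

```python
import numpy as np

def orthographic_views(points, resolution=64):
    """Project a point cloud onto three orthographic planes (top, front, side),
    producing one depth image per view. Hypothetical sketch, not the paper's code."""
    # Normalize the cloud into the unit cube [0, 1]^3.
    mins, maxs = points.min(axis=0), points.max(axis=0)
    normed = (points - mins) / np.maximum(maxs - mins, 1e-9)

    views = {}
    # Each view keeps two axes as pixel coordinates (u, v); the third axis
    # becomes the per-pixel depth value.
    for name, (u, v, d) in {"top": (0, 1, 2),
                            "front": (0, 2, 1),
                            "side": (1, 2, 0)}.items():
        img = np.zeros((resolution, resolution), dtype=np.float32)
        px = np.clip((normed[:, u] * (resolution - 1)).astype(int), 0, resolution - 1)
        py = np.clip((normed[:, v] * (resolution - 1)).astype(int), 0, resolution - 1)
        # Keep the maximum depth per pixel, i.e. the surface nearest the viewer.
        np.maximum.at(img, (py, px), normed[:, d])
        views[name] = img
    return views
```

Depth images like these can then be fed to a convolutional network that predicts a grasp quality, angle, and width per pixel, which is what a pixel-wise grasp synthesis amounts to.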

Interactive Robot Learning Lab (IRL-Lab): https://www.ai.rug.nl/irl-lab/
