[IROS 2023] EARL: Eye-on-Hand Reinforcement Learner for Dynamic Grasping with Active Pose Estimation


MERL researcher Siddarth Jain and MERL intern Baichuan Huang presented their paper "EARL: Eye-on-Hand Reinforcement Learner for Dynamic Grasping with Active Pose Estimation" at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023), held October 1-5, 2023 at Huntington Place in Detroit, USA. The paper was co-authored with Jingjin Yu.

Paper: https://www.merl.com/publications/doc...

Abstract: We explore the dynamic grasping of moving objects through active pose tracking and reinforcement learning for hand-eye coordination systems. Most existing vision-based robotic grasping methods implicitly assume target objects are stationary or moving predictably. Grasping unpredictably moving objects presents a unique set of challenges. For example, a pre-computed robust grasp can become unreachable or unstable as the target object moves, and motion planning must also be adaptive. In this work, we present a new approach, Eye-on-hAnd Reinforcement Learner (EARL), for enabling coupled Eye-on-Hand (EoH) robotic manipulation systems to perform real-time active pose tracking and dynamic grasping of novel objects without explicit motion prediction. EARL readily addresses many thorny issues in automated hand-eye coordination, including fast tracking of the 6D object pose from vision, learning a control policy for a robotic arm to track a moving object while keeping the object in the camera's field of view, and performing dynamic grasping. We demonstrate the effectiveness of our approach in extensive experiments validated on multiple commercial robotic arms in both simulations and complex real-world tasks.
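
The abstract names three coupled pieces: a 6D pose tracker driven by the wrist-mounted camera, a learned policy that keeps the moving object in the camera's field of view while the arm closes in, and a grasp trigger. The Python sketch below is a minimal, hypothetical illustration of that loop, not EARL's implementation: estimate_object_pose, tracking_policy, and run_episode are invented names, the tracker is reduced to noisy position readings (the paper tracks full 6D pose), and a simple proportional controller stands in for the learned reinforcement-learning policy.

    import numpy as np

    # Hypothetical stand-ins for the components the abstract names; the
    # actual EARL pose tracker and learned policy are not given here.

    def estimate_object_pose(true_pos, noise_std=0.005):
        """Stub pose estimator: true object position plus sensor noise.
        (EARL tracks the full 6D pose from vision; position-only here.)"""
        return true_pos + np.random.normal(0.0, noise_std, size=3)

    def tracking_policy(ee_pos, obj_estimate, gain=0.8, max_step=0.05):
        """Toy proportional controller standing in for the learned policy:
        move the end effector (and its eye-on-hand camera) toward the
        object so it stays in the camera's field of view."""
        step = gain * (obj_estimate - ee_pos)
        norm = np.linalg.norm(step)
        return step if norm <= max_step else step * (max_step / norm)

    def run_episode(steps=200, grasp_radius=0.02, dt=0.05):
        ee_pos = np.zeros(3)                    # end-effector position (m)
        obj_pos = np.array([0.4, 0.0, 0.1])     # moving target's true position
        obj_vel = np.array([0.0, 0.05, 0.0])    # unpredictable in EARL; constant here
        for t in range(steps):
            obj_pos = obj_pos + obj_vel * dt            # object keeps moving
            obs = estimate_object_pose(obj_pos)         # active pose estimate
            ee_pos = ee_pos + tracking_policy(ee_pos, obs)
            if np.linalg.norm(obj_pos - ee_pos) < grasp_radius:
                print(f"grasp triggered at step {t}")   # dynamic grasp
                return True
        print("object never came within grasp range")
        return False

    if __name__ == "__main__":
        np.random.seed(0)
        run_episode()

The key property the sketch preserves is that no motion model of the object is ever fit: the controller reacts only to the latest pose estimate, which is the "without explicit motion prediction" aspect the abstract emphasizes.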
