Ameca RoboCopyCat's Mishaps: Learning Moments in Humanoid Emah Gesture Mimicry

Since 2022, our team has been developing Emah, a real-time Human-Robot Interaction (HRI) system that uses a Generation 1 Ameca robot from Engineered Arts as its front end. The Emah system combines rule-based behavior models with custom machine learning and deep learning architectures to provide realistic communication between human and robot. The system integrates two external sensors, a microphone and a ZED2 camera, for enhanced dialogue processing and vision capabilities. Google's Speech-to-Text service handles human speech detection, while Stereolabs' ZED driver supports perception of humans and the environment. Ameca delivers robot speech with lip sync and expresses emotions, both generated by our team.
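
As a rough illustration of how such a pipeline can be wired together (a minimal sketch, not our actual implementation), the Python fragment below uses Google's real Speech-to-Text client for transcription; emah_dialogue_reply() and ameca_speak_with_lipsync() are hypothetical stand-ins for the unpublished Emah dialogue and robot-control components:

```python
# A minimal sketch of one speech -> dialogue -> robot-speech turn.
# google.cloud.speech is Google's real Speech-to-Text Python client;
# emah_dialogue_reply() and ameca_speak_with_lipsync() below are
# hypothetical stand-ins for Emah's unpublished components.
from google.cloud import speech


def transcribe_utterance(audio_bytes: bytes) -> str:
    """Send one recorded microphone utterance to Google Speech-to-Text."""
    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
    )
    audio = speech.RecognitionAudio(content=audio_bytes)
    response = client.recognize(config=config, audio=audio)
    # Join the top hypothesis of each recognized segment.
    return " ".join(r.alternatives[0].transcript for r in response.results)


def emah_dialogue_reply(text: str) -> tuple[str, str]:
    """Hypothetical stand-in for Emah's rule-based/ML dialogue models."""
    return f"You said: {text}", "neutral"


def ameca_speak_with_lipsync(text: str, emotion: str) -> None:
    """Hypothetical stand-in for Ameca's speech, lip-sync, and emotion output."""
    print(f"[Ameca | {emotion}] {text}")


def interaction_step(audio_bytes: bytes) -> None:
    """One HRI turn: hear the human, generate a reply, have the robot speak."""
    user_text = transcribe_utterance(audio_bytes)
    reply_text, emotion = emah_dialogue_reply(user_text)
    ameca_speak_with_lipsync(reply_text, emotion)
```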

Ameca's design, notable for its realistic facial expressions, hand gestures, and head poses, enhances engagement in user studies by making interactions feel more natural. Ameca's default system can track multiple faces simultaneously through a high-definition chest camera and binocular eye cameras, supporting eye tracking and eye contact. Unlike many state-of-the-art (SOTA) robots, Ameca can move its neck and clavicle, and its blue silicone face mask stands out for its lifelike appearance.
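
To make the eye-contact behavior concrete, here is a small hypothetical sketch of one way a controller could pick a gaze target when several faces are tracked at once, by fixating the nearest detected face. The TrackedFace record and the nearest-face heuristic are illustrative assumptions, not Ameca's actual API:

```python
from __future__ import annotations

import math
from dataclasses import dataclass


@dataclass
class TrackedFace:
    """Hypothetical record for one face reported by the tracking cameras."""
    face_id: int
    x: float  # metres, camera frame
    y: float
    z: float  # depth along the optical axis


def pick_gaze_target(faces: list[TrackedFace]) -> TrackedFace | None:
    """Choose the closest face as the eye-contact target (a simple heuristic)."""
    if not faces:
        return None
    return min(faces, key=lambda f: math.sqrt(f.x**2 + f.y**2 + f.z**2))


# Example: with two tracked faces, the controller would fixate face 7.
faces = [TrackedFace(3, 0.4, 0.1, 2.5), TrackedFace(7, -0.2, 0.0, 1.1)]
target = pick_gaze_target(faces)
if target is not None:
    print(f"Look at face {target.face_id}")  # a real system would drive the eyes here
```

Fixating the nearest face is only one plausible heuristic; a production gaze controller would likely also weigh cues such as who is currently speaking.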

Our research with the Emah system, including studies on social perception, scene awareness, and observational learning with robot tutees, has contributed significantly to the field of HRI. Future research aims to further integrate the Emah system as a student companion at our university, leveraging its adult-like appearance and customized interaction capabilities.
