Get it: https://category.yahboom.net/collecti...
Tutorials: https://www.yahboom.net/study/ROSMAST...
The newly developed ROSMASTER M1 by Yahboom is an omnidirectional mobile embodied intelligent robot designed for robotics education, ROS research, and AI multimodal interaction experiments. It adopts a Mecanum wheel chassis and supports precise omnidirectional movement, enabling complex trajectory control such as lateral movement, diagonal movement, and in-place rotation, providing an excellent experimental platform for path planning and motion control algorithm research.
ROSMASTER M1 can be equipped with various peripherals, including a 3D depth camera, a 2MP HD camera PTZ (optional), LiDAR, an AI voice module, and a ROS robot expansion board, building human-level 3D visual perception and environmental understanding capabilities. It supports Raspberry Pi 5, RDK X5, Jetson Nano 4GB, and Jetson Orin Nano 8G, is fully compatible with ROS2 HUMBLE, and integrates deeply with mainstream AI frameworks. Employing an innovative multimodal dual-model collaborative reasoning architecture, it efficiently fuses visual, voice, and text information, enabling human-like capabilities such as continuous dialogue, instant interruption, dynamic scene reasoning, and intent inference.
Whether conducting SLAM mapping and navigation, AI visual recognition, path planning research, or carrying out multimodal human-computer interaction experiments, this robot car can meet all your needs.
Mecanum omnidirectional drive chassis, High-torque 520 encoder motor
ROSMASTER M1 adopts a high-performance four-wheel Mecanum drive, enabling omnidirectional movements such as lateral movement, diagonal movement, in-place rotation, and precise edge following. It meets the teaching and research needs of robot path planning, omnidirectional obstacle avoidance, and motion control algorithms, providing a solid motion foundation for demanding mobile robot experiments.
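The omnidirectional maneuvers described above follow from standard Mecanum inverse kinematics: a desired body twist (forward velocity, strafe velocity, yaw rate) maps to four individual wheel speeds. A minimal sketch in Python, using placeholder wheel radius and chassis dimensions that are illustrative only, not the M1's actual specifications:

```python
def mecanum_wheel_speeds(vx, vy, wz, r=0.04, lx=0.10, ly=0.10):
    """Map a body twist to the four wheel angular velocities (rad/s).

    vx: forward velocity (m/s), vy: leftward strafe velocity (m/s),
    wz: yaw rate (rad/s). r, lx, ly are a placeholder wheel radius and
    half chassis length/width -- NOT the M1's real dimensions.
    Standard X-configuration Mecanum inverse kinematics.
    """
    k = lx + ly
    front_left  = (vx - vy - k * wz) / r
    front_right = (vx + vy + k * wz) / r
    rear_left   = (vx + vy - k * wz) / r
    rear_right  = (vx - vy + k * wz) / r
    return front_left, front_right, rear_left, rear_right

# Pure leftward strafe: diagonally opposite wheels spin the same way,
# so the lateral roller forces cancel rotation and the robot slides sideways.
print(mecanum_wheel_speeds(0.0, 0.2, 0.0))  # → (-5.0, 5.0, 5.0, -5.0)
```

Driving straight (vy = wz = 0) reduces to all four wheels turning at the same speed, which is a quick sanity check for any implementation.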
Multi-master platform compatibility, Meeting needs from beginner to researcher
Supports multiple computing platforms including Raspberry Pi 5, RDK X5, Jetson Nano 4GB, and Jetson Orin Nano 8G. Compatible with ROS2 HUMBLE, it adapts to multi-level needs such as school teaching, laboratory work, and AI research, giving users a high degree of scalability and sustainability.
Multi-sensor fusion, Building human-like 3D perception
Equipped with a 3D depth camera, a 2MP HD camera PTZ, LiDAR, an AI voice module, and other peripherals, the robot forms a multimodal environmental perception system, supporting advanced applications such as visual recognition, SLAM mapping, and environmental understanding.
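At the core of the SLAM mapping mentioned above is the conversion of lidar range/bearing readings into map cells. A toy sketch of that one step, with all parameter values illustrative rather than the M1 lidar's actual specifications; a real SLAM stack also ray-traces free space and estimates the robot's pose:

```python
import math

def mark_scan(grid, pose, ranges, angle_min, angle_step, resolution=0.05):
    """Toy occupancy update: mark the cell hit by each lidar beam.

    pose is (x, y, theta) in world coordinates; grid is a dict mapping
    (cell_x, cell_y) indices to hit counts. Beams with an infinite range
    (no return) are skipped.
    """
    x, y, theta = pose
    for i, r in enumerate(ranges):
        if math.isinf(r):
            continue  # no return for this beam
        a = theta + angle_min + i * angle_step
        # Beam endpoint in world coordinates, then to grid indices.
        gx = int((x + r * math.cos(a)) / resolution)
        gy = int((y + r * math.sin(a)) / resolution)
        grid[(gx, gy)] = grid.get((gx, gy), 0) + 1  # occupancy hit count
    return grid

# Three beams straight ahead from the origin; two return a 1 m hit.
grid = mark_scan({}, (0.0, 0.0, 0.0), [1.0, 1.0, float("inf")], 0.0, 0.0)
print(grid)  # → {(20, 0): 2}
```

Accumulating hit counts per cell is the simplest possible evidence model; production mappers use log-odds updates over both occupied and traversed cells.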
Multimodal human-like Interaction with superior AI capabilities
Utilizing an innovative multimodal dual-model collaborative reasoning architecture, it deeply integrates visual, voice, and text information. It possesses cutting-edge human-like interaction capabilities, including continuous dialogue, real-time interruption, dynamic scene reasoning, and intent inference.
Highly scalable for research purposes, Meeting diverse experimental needs
Suitable for various advanced scenarios such as SLAM navigation, AI visual recognition, path planning, multimodal interaction, and embodied intelligence research. Supports a wealth of ROS teaching examples and open-source resources, making it easy to use in classroom teaching and research projects.