
Google has introduced two new artificial intelligence (AI) models—Gemini Robotics and Gemini Robotics-ER (Embodied Reasoning)—designed to enhance the capabilities of robots. Both models, built on Google's Gemini 2.0 foundation, enable robots to execute complex, multi-step tasks with greater precision and adaptability.
Gemini Robotics integrates vision-language-action (VLA) capabilities, allowing robots to interpret and act on instructions more effectively, even in unfamiliar environments. Gemini Robotics-ER, meanwhile, strengthens spatial reasoning and task planning, enabling robots to better perceive their surroundings, assess conditions, and generate code for task execution in real time.
According to Google, these advancements will significantly impact the robotics industry, making AI-powered automation more efficient and adaptive. The models were trained on the ALOHA 2 bi-arm robotic platform but are also compatible with other robotic systems, including Franka arms and humanoid robots like Apollo, developed by Apptronik.
Google has also announced a strategic partnership with Apptronik to push the boundaries of humanoid robotics. Additionally, it is collaborating with Agile Robots, Agility Robotics, Boston Dynamics, and Enchanted Tools to further test Gemini Robotics-ER’s capabilities.
With these innovations, Google aims to reshape automation across multiple industries, fostering closer human-robot collaboration and unlocking new efficiencies in robotics.