Google Unveils 'Groundbreaking' New AI-Powered Robot


SHANGHAI, CHINA - JULY 08: An Ubtech Walker X robot plays Chinese chess during the 2021 World Artificial Intelligence Conference (WAIC) at Shanghai World Expo Center on July 8, 2021 in Shanghai, China. (Photo by Yang Jianzheng/VCG via Getty Images)


On Friday, Google DeepMind unveiled Robotic Transformer 2 (RT-2), a groundbreaking vision-language-action (VLA) model aimed at creating general-purpose robots adept at navigating human environments.

RT-2 is built on a large language model, similar to the technology behind ChatGPT, trained on text and images sourced from the internet. This approach gives the robots a capacity for “generalization,” enabling them to perform tasks they were never explicitly trained on.

“The aim is to build robots that comprehend and act in our world as naturally as characters like WALL-E or C-3PO,” said a spokesperson for Google DeepMind. Thanks to generalization, an RT-2 robot can identify and dispose of trash it was never explicitly shown, even ambiguous items like discarded food packaging or banana peels. It is this understanding of what trash typically looks like that guides its actions.

Moreover, the RT-2 model is significant for its inherent ability to adapt to changing scenarios in the real world – an ability unachievable through explicit programming. For instance, when instructed to “Pick up the extinct animal,” the RT-2 robot was able to discern and select a dinosaur figurine among various options.

RT-2 builds on Google’s earlier AI projects, including the Pathways Language and Image model (PaLI-X) and the Pathways Language model Embodied (PaLM-E). It is also co-trained on demonstration data from its predecessor, RT-1, gathered over 17 months by 13 robots in an office kitchen environment. The result is a refined VLA model that takes in robot camera images and predicts the actions the robot should perform.

“To enhance robot control, we adopted a strategy of representing actions as tokens, similar to language tokens,” Google explained. Representing actions as strings of tokens allows RT-2 to learn new robot skills with the same models used to process web data.
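The idea can be sketched roughly as follows: each continuous dimension of a robot action (arm position, rotation, gripper state) is discretized into one of a fixed number of bins, and the resulting integers are emitted as a token string, just like words in text generation. This is an illustrative sketch, not the actual RT-2 vocabulary or binning scheme; the value ranges and the 7-dimension layout are assumptions.

```python
import numpy as np

NUM_BINS = 256  # assumed bin count per action dimension (illustrative)

def action_to_tokens(action, low=-1.0, high=1.0, num_bins=NUM_BINS):
    """Map each continuous action dimension to an integer bin token."""
    action = np.clip(np.asarray(action, dtype=float), low, high)
    bins = np.round((action - low) / (high - low) * (num_bins - 1))
    return bins.astype(int).tolist()

def tokens_to_string(tokens):
    """Serialize the tokens as a space-separated string, as in text output."""
    return " ".join(str(t) for t in tokens)

# Hypothetical 7-DoF action: (x, y, z, roll, pitch, yaw, gripper)
action = [0.1, -0.2, 0.0, 0.5, -1.0, 1.0, 0.8]
tokens = action_to_tokens(action)
print(tokens_to_string(tokens))
```

Because the output is just a string of integer tokens, the same transformer that generates text on web data can generate robot actions without any architectural change.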

RT-2 further exhibits its advanced capabilities with chain-of-thought reasoning, enabling complex, multi-stage decision-making. For instance, it can choose an alternate tool or decide the best beverage for a tired individual.
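In a chain-of-thought setup like the one described above, the model emits an intermediate natural-language plan before its action tokens, and a controller splits the response back apart. The "Plan: ... Action: ..." format and token values below are assumptions for illustration, not the exact RT-2 output format.

```python
def parse_vla_output(text):
    """Split a model response into its natural-language plan and action tokens."""
    plan_part, _, action_part = text.partition("Action:")
    plan = plan_part.replace("Plan:", "").strip()
    tokens = [int(t) for t in action_part.split()]
    return plan, tokens

# Hypothetical model response for "Pick up something to pound the nail with"
response = "Plan: pick up the rock. Action: 1 128 91 241 5 101 127"
plan, tokens = parse_vla_output(response)
```

The plan text makes the multi-stage reasoning inspectable, while the trailing tokens are what actually drive the robot.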

In over 6,000 trials, RT-2 performed as effectively as RT-1 on known tasks. However, in unseen scenarios, RT-2 nearly doubled its predecessor’s performance, achieving a success rate of 62 percent.

Despite these advancements, Google concedes that RT-2 has limitations. While web data enhances the robot’s generalization capabilities, it can’t extend its physical abilities beyond what it learned from RT-1’s training data.



