RT-2: DeepMind robotic research based on PaLM vision-language models
www.deepmind.com: RT-2: New model translates vision and language into action
Introducing Robotic Transformer 2 (RT-2), a novel vision-language-action (VLA) model that learns from both web and robotics data, and translates this knowledge into generalised instructions for robotic control, while retaining web-scale capabilities. This work builds upon Robotic Transformer 1 (RT-1...
Up to 100% improvement on unseen tasks, environments, and backgrounds