Two new studies introduce AI systems that use either video or photos to create simulations that can train robots to function in the real world. This could significantly lower the costs of training robots to function in complex settings. Here, the URDFormer system transforms an internet photo of a kitchen into a functioning simulation of that kitchen. Chen et al./RSS 2024
Researchers working on large artificial intelligence models like ChatGPT have vast swaths of internet text, photos and videos to train systems. But roboticists training physical machines face barriers: Robot data is expensive, and because there aren’t fleets of robots roaming the world at large, there simply isn’t enough data easily available to make them perform well in dynamic environments, such as people’s homes.
Some researchers have turned to simulations to train robots. Yet even that process, which often involves a graphic designer or engineer, is laborious and costly.
Two new studies from University of Washington researchers introduce AI systems that use either video or photos to create simulations that can train robots to function in real settings. This could significantly lower the costs of training robots to function in complex settings.
In the first study, a user quickly scans a space with a smartphone to record its geometry. The system, called RialTo, can then create a “digital twin” simulation of the space, where the user can enter how different things function (opening a drawer, for instance). A robot can then virtually repeat motions in the simulation with slight variations to learn to do them effectively. In the second study, the team built a system called URDFormer, which takes images of real environments from the internet and quickly creates physically realistic simulation environments where robots can train.
The teams presented their studies – the RialTo study and the URDFormer study – at the Robotics: Science and Systems conference in Delft, Netherlands.
“We’re trying to enable systems that cheaply go from the real world to simulation,” said Abhishek Gupta, a UW assistant professor in the Paul G. Allen School of Computer Science & Engineering and co-senior author on both papers. “The systems can then train robots in those simulation scenes, so the robot can function more effectively in a physical space. That’s useful for safety – you can’t have poorly trained robots breaking things and hurting people – and it potentially widens access. If you can get a robot to work in your house just by scanning it with your phone, that democratizes the technology.”
While many robots are currently well suited to working in structured environments like assembly lines, teaching them to interact with people and operate in less structured environments remains a challenge.
“In a factory, for example, there’s a ton of repetition,” said Zoey Chen, lead author of the URDFormer study and a UW doctoral student in the Allen School. “The tasks might be hard to do, but once you program a robot, it can keep doing the task over and over and over. Whereas homes are unique and constantly changing. There’s a diversity of objects, of tasks, of floorplans and of people moving through them. This is where AI becomes really useful to roboticists.”
The two systems approach these challenges in different ways.
RialTo – which Gupta created with a team at the Massachusetts Institute of Technology – has someone pass through an environment and take video of its geometry and moving parts. For instance, in a kitchen, they’ll open cabinets, the toaster and the fridge. The system then uses existing AI models – and a human does some quick work through a graphic user interface to show how things move – to create a simulated version of the kitchen shown in the video. A virtual robot trains itself through trial and error in the simulated environment by repeatedly attempting tasks such as opening that toaster oven – a method called reinforcement learning.
By going through this process in the simulation, the robot improves at that task and works around disturbances or changes in the environment, such as a mug placed beside the toaster. The robot can then transfer that learning to the physical environment, where it’s nearly as accurate as a robot trained in the real kitchen.
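To make that trial-and-error idea concrete, here is a minimal, hypothetical Python sketch of such a loop running in a simulated environment. The toy environment, reward function and hill-climbing update below are illustrative stand-ins chosen for brevity; they are not RialTo’s actual code, models or training algorithm.

```python
# Illustrative sketch only: a toy trial-and-error loop in a simulated "digital twin".
# The environment, reward, and update rule are hypothetical stand-ins, not RialTo's API.
import random


class ToyToasterEnv:
    """Minimal stand-in for a digital-twin simulator of one articulated object."""

    def __init__(self, target_angle=1.2, noise=0.05):
        self.target_angle = target_angle  # hinge angle that counts as "open"
        self.noise = noise                # small per-episode randomization

    def rollout(self, action):
        """Simulate one attempt; return a reward (higher = door more open)."""
        disturbance = random.uniform(-self.noise, self.noise)  # e.g. a mug nudged nearby
        achieved = action + disturbance
        return -abs(self.target_angle - achieved)               # distance-to-goal reward


def train(env, episodes=200, step_size=0.1):
    """Hill-climbing stand-in for improving a policy through repeated attempts."""
    best_action, best_reward = 0.0, float("-inf")
    for _ in range(episodes):
        candidate = best_action + random.uniform(-step_size, step_size)
        reward = env.rollout(candidate)
        if reward > best_reward:            # keep attempts that worked better
            best_action, best_reward = candidate, reward
    return best_action


if __name__ == "__main__":
    env = ToyToasterEnv()
    learned = train(env)
    print(f"learned hinge command: {learned:.2f} rad (target ~1.2 rad)")
```

Because each simulated attempt adds a small random disturbance, the learned behavior has to tolerate variation – a simplified analogue of the mug-beside-the-toaster changes described above.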
The other system, URDFormer, is focused less on relatively high accuracy in a single kitchen; instead, it quickly and cheaply conjures hundreds of generic kitchen simulations. URDFormer scans images from the internet and pairs them with existing models of how, for instance, those kitchen drawers and cabinets will likely move. It then predicts a simulation from the initial real-world image, allowing researchers to quickly and inexpensively train robots in a huge range of environments. The trade-off is that these simulations are significantly less accurate than those that RialTo generates.
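For illustration, here is a hedged sketch of the last step such a pipeline could take: turning a predicted scene structure into a URDF (Unified Robot Description Format) file that common simulators can load. The hard part – predicting parts and joints from a photo – is mocked here as a hand-written dictionary, and all names and values are hypothetical rather than URDFormer’s actual outputs.

```python
# Illustrative sketch only: emitting a URDF-style description from a *predicted*
# scene structure. The image-to-structure prediction itself is mocked below.
predicted_scene = {  # hypothetical output of an image-to-structure model
    "cabinet_door": {"joint": "revolute", "axis": "0 0 1", "limit": (0.0, 1.57)},
    "drawer":       {"joint": "prismatic", "axis": "1 0 0", "limit": (0.0, 0.4)},
}


def to_urdf(scene, name="kitchen"):
    """Emit a minimal URDF string with one articulated link per predicted part."""
    lines = [f'<robot name="{name}">', '  <link name="base"/>']
    for part, spec in scene.items():
        lo, hi = spec["limit"]
        lines += [
            f'  <link name="{part}"/>',
            f'  <joint name="{part}_joint" type="{spec["joint"]}">',
            '    <parent link="base"/>',
            f'    <child link="{part}"/>',
            f'    <axis xyz="{spec["axis"]}"/>',
            f'    <limit lower="{lo}" upper="{hi}" effort="10" velocity="1"/>',
            '  </joint>',
        ]
    lines.append('</robot>')
    return "\n".join(lines)


if __name__ == "__main__":
    print(to_urdf(predicted_scene))
```

Generating many such rough-but-plausible scene descriptions cheaply is the point: breadth of training environments is traded for the per-scene fidelity that RialTo provides.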