
When robots imitate life: Project explores better way to train AI

One way that scientists train robots and artificial intelligence (AI) models to perform tasks – think self-driving cars – is by feeding them a perfect demonstration of what to do and asking them to copy it. This process, called imitation learning, is slow and expensive, and the resulting systems often can’t handle more complex real-world scenarios.

Instead, what if researchers could provide lots of imperfect demonstrations and have the system piece together a better approach? This strategy, called superhuman imitation learning, is the focus of a new project co-led by Sanjiban Choudhury, assistant professor of computer science in the Cornell Ann S. Bowers College of Computing and Information Science, along with Brian Ziebart and Xinhua Zhang of the University of Illinois at Chicago. They have received a nearly $1.2 million grant from the National Science Foundation to support this work for three years.

Choudhury, who heads the People and Robot Teaching and Learning group, will use this approach to train robots that assist people at home, so that robots can one day safely and efficiently perform tasks like fetching a can of soup from the pantry and heating it up on the stove.

To test out this idea, Choudhury will have multiple users manipulate the robot to perform a series of tasks, like opening a drawer. Some will guide the robot well, but others will make mistakes. Then his group will develop an algorithm that, instead of blindly copying the demonstrations, tries to outperform them on a number of objectives – like not opening the drawer too slowly, or with too much force.
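The idea of outperforming imperfect demonstrations rather than copying them can be illustrated with a toy sketch. This is a hypothetical example, not the project's actual algorithm: each drawer-opening demo is scored against simple objectives (not too slow, not too forceful), and demos are blended with softmax weights so poor demonstrations contribute less than they would under naive averaging. All names, thresholds, and numbers here are invented for illustration.

```python
import math

def objective_score(demo):
    """Score a drawer-opening demo: penalize slow opening and high force.
    Thresholds (2 s, 10 N) are illustrative assumptions."""
    speed_penalty = max(0.0, demo["duration_s"] - 2.0)   # too slow beyond 2 s
    force_penalty = max(0.0, demo["force_n"] - 10.0)     # too forceful beyond 10 N
    return -(speed_penalty + force_penalty)

def weighted_policy_target(demos):
    """Blend demos into one target action, softmax-weighted by score,
    instead of averaging all demos equally (naive behavioral cloning)."""
    weights = [math.exp(objective_score(d)) for d in demos]
    total = sum(weights)
    return sum(w * d["grip_angle"] for w, d in zip(weights, demos)) / total

demos = [
    {"duration_s": 1.5, "force_n": 8.0,  "grip_angle": 30.0},  # good demo
    {"duration_s": 4.0, "force_n": 9.0,  "grip_angle": 50.0},  # too slow
    {"duration_s": 1.8, "force_n": 25.0, "grip_angle": 70.0},  # too forceful
]

naive_mean = sum(d["grip_angle"] for d in demos) / len(demos)   # copies everyone
target = weighted_policy_target(demos)                          # favors the good demo
```

Here the naive average of all three demos is 50 degrees, while the score-weighted target lands close to the good demonstration's 30 degrees, showing how downweighting flawed demos can yield behavior better than the typical demonstrator.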

“We would like to see if the robot can still learn a behavior, even from these imperfect demonstrations, and do the task very, very well,” Choudhury said. He expects that learning from multiple, diverse teachers will make the robots more efficient and adaptable.

