A new learning method developed by researchers at Carnegie Mellon University (CMU) enables robots to learn directly from videos of human interactions and generalize that information to new tasks, helping them carry out household chores. The method, called WHIRL (In-the-Wild Human Imitating Robot Learning), lets a robot watch a person perform a task, gather that video data, and then learn to complete the job itself.
The research was presented at the Robotics: Science and Systems conference in New York.
Imitation as a Way to Learn
Shikhar Bahl is a Ph.D. student at the Robotics Institute (RI) in Carnegie Mellon University’s School of Computer Science.
“Imitation is a great way to learn,” Bahl said. “Having robots actually learn from directly watching humans remains an unsolved problem in the field, but this work takes a significant step in enabling that ability.”
Bahl worked alongside Deepak Pathak and Abhinav Gupta, both faculty members in the RI. The team added a camera and their software to an off-the-shelf robot, which learned how to complete more than 20 tasks, from opening and closing appliances to taking a garbage bag out of the bin. Each time, the robot watched a human complete the task before attempting it itself.
Pathak is an assistant professor in the RI.
“This work presents a way to bring robots into the home,” Pathak said. “Instead of waiting for robots to be programmed or trained to successfully complete different tasks before deploying them into people’s homes, this technology allows us to deploy the robots and have them learn how to complete tasks, all the while adapting to their environments and improving solely by watching.”
WHIRL vs. Current Methods
Most current methods for teaching a robot a task rely on imitation learning or reinforcement learning. With imitation learning, humans manually operate a robot to demonstrate a task, a process that must be repeated many times before the robot learns it. With reinforcement learning, the robot is typically trained on millions of examples in simulation before the training is adapted to the real world.
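To make the contrast concrete, here is a minimal toy sketch of the two training regimes on an imaginary one-dimensional reaching task. The task, the constants, and the learning rules are illustrative assumptions, not the setups used in the CMU work.

```python
import random

TARGET = 1.0  # toy 1-D task: move the gripper from 0.0 to the target position

def reward(final_pos):
    # Higher reward the closer the gripper ends up to the target.
    return -abs(final_pos - TARGET)

def train_by_imitation(num_demos=10):
    """Imitation learning (toy): a 'human' repeatedly operates the robot,
    producing near-perfect demonstrated actions, and the policy simply
    averages those demonstrations."""
    demos = [TARGET + random.gauss(0, 0.05) for _ in range(num_demos)]
    return sum(demos) / len(demos)

def train_by_reinforcement(num_episodes=100_000):
    """Reinforcement learning (toy): trial and error over many simulated
    episodes, keeping the best-scoring action found so far."""
    best_action, best_r = 0.0, reward(0.0)
    for _ in range(num_episodes):
        action = random.uniform(-2, 2)
        r = reward(action)
        if r > best_r:
            best_action, best_r = action, r
    return best_action

print("imitation policy:", train_by_imitation())
print("RL policy:       ", train_by_reinforcement())
```

Even in this toy form, the trade-off the researchers describe is visible: imitation needs repeated human demonstrations, while reinforcement learning needs a very large number of trial episodes.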
While both of these approaches are effective at teaching a robot a single task in a structured environment, they prove difficult to scale and deploy. With WHIRL, by contrast, a robot can learn from any video of a human completing a task. The method is easily scalable, not confined to a single task, and can operate in home environments.
WHIRL enables robots to accomplish tasks in their natural environments. While a robot's first few attempts at a task usually ended in failure, it learned quickly once it achieved a few successes. The robot does not always perform the task with the same movements as a human, because its body is built differently, but the end result is the same.
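As an illustration of this observe-attempt-improve pattern, the toy sketch below starts from a noisy prior standing in for what a single human video might suggest, explores around it, and refines its policy only from its own successful attempts. Every detail here (the one-dimensional task, the success margin, the averaging rule) is an assumption for illustration, not the actual WHIRL algorithm.

```python
import random

TARGET = 1.0          # toy 1-D task: reach the target position
SUCCESS_MARGIN = 0.1  # an attempt "succeeds" if it lands this close

def prior_from_human_video():
    """Stand-in for extracting a coarse motion prior from one human video;
    here it is just a noisy guess at the target."""
    return TARGET + random.gauss(0, 0.4)

def observe_and_practice(num_attempts=30):
    """Illustrative loop: start from the video prior, explore around it,
    and refine the policy only from the robot's own successful attempts."""
    policy = prior_from_human_video()
    successes = []
    for _ in range(num_attempts):
        action = policy + random.gauss(0, 0.3)        # exploratory attempt
        if abs(action - TARGET) < SUCCESS_MARGIN:     # did the attempt work?
            successes.append(action)
            policy = sum(successes) / len(successes)  # refine from successes
    return policy, len(successes)

policy, wins = observe_and_practice()
print(f"refined policy: {policy:.3f} after {wins} successful attempts")
```

The sketch mirrors the behavior described in the article: early attempts often miss, but once a few attempts succeed, the policy tightens quickly around what works.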
“To scale robotics in the wild, the data must be reliable and stable, and the robots should become better in their environment by practicing on their own,” Pathak said.