A new method teaches robots “vision-based pursuit”. In short, robots can now chase humans.
Researchers at UC Berkeley have developed a new way to teach robots strategic decision-making for dynamic tasks such as playing tag. Rather than simply trailing a person or another robot, the robot cuts off its target and actively searches for it when it loses sight of it.
Learning such behaviors in the real world is extremely difficult for a robot: its sensors give it only limited knowledge of the environment and of other agents, the goals of those agents are unclear, and movement in the physical world is fundamentally harder than in simulation. As a result, attempts to learn such behaviors directly, for example through reinforcement learning, have so far failed.
Dog robot learns from omniscient AI teacher
The team therefore uses a different approach called “privileged learning”, a form of supervised learning in which a teacher with access to extra information guides a student that lacks it.
Applied to robots, this means the teacher policy is given the evader’s future trajectory and uses it to infer the evader’s intentions. Equipped with this privileged information, the teacher can show the student robot, step by step, which actions to take. The inherently hard planning problem thus reduces to a simple supervised learning problem for the student.
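The teacher-student setup can be sketched in a deliberately toy form: a privileged teacher that sees where the evader will be outputs target actions, and a student that only sees current observations is fitted to imitate them by plain supervised regression. The function names, the one-second lookahead horizon, and the linear student model are illustrative assumptions, not the paper’s method; the actual system trains a neural-network policy on camera and proprioceptive input.

```python
import numpy as np

rng = np.random.default_rng(0)

def teacher_action(robot_pos, evader_future_pos):
    # Privileged teacher: it knows where the evader WILL be
    # and commands a velocity toward that intercept point.
    return evader_future_pos - robot_pos

# Synthetic training data (toy stand-in for simulation rollouts).
n = 2000
robot = rng.uniform(-5, 5, (n, 2))
evader = rng.uniform(-5, 5, (n, 2))
evader_vel = rng.uniform(-1, 1, (n, 2))

horizon = 1.0                                  # assumed lookahead (seconds)
evader_future = evader + horizon * evader_vel  # privileged information

# Teacher labels computed with access to the future trajectory.
labels = np.array([teacher_action(r, f) for r, f in zip(robot, evader_future)])

# Student observation: only current relative position and evader velocity.
obs = np.hstack([evader - robot, evader_vel])

# Supervised distillation: fit a (linear, illustrative) student policy
# to reproduce the teacher's actions from non-privileged observations.
W, *_ = np.linalg.lstsq(obs, labels, rcond=None)
pred = obs @ W

mse = np.mean((pred - labels) ** 2)
print(f"imitation MSE: {mse:.2e}")
```

Because the toy teacher happens to be a linear function of the student’s observations, the fit is essentially exact here; the point is only the structure of the pipeline, in which planning with privileged information becomes a regression target for the student.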
Despite the simplicity of the method, the robot learns dynamic behaviors, such as reducing its speed when the evader turns, or intercepting it by predicting where it will be.
The researchers tested their approach on a real four-legged robot that played tag with humans and other robots, relying solely on built-in cameras and proprioception.
The real robot exhibited the same complex behaviors that the underlying model had learned in simulation.
So far, the system can’t handle obstacles – for that, it needs more extensive AI training and better sensors, the researchers said.
More information is available on the project page.