NVIDIA researchers have demonstrated how robots can be trained to observe and repeat human actions — a “first of its kind” capability powered by deep learning.
Researchers Stan Birchfield and Jonathan Tremblay led a team that trained a sequence of neural networks to handle perception, program generation, and program execution. As a result, the robot was able to learn a task from a single real-world demonstration.
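At a high level, that perception → program generation → execution pipeline might be wired together as sketched below. This is a minimal illustrative stub: the function names, data formats, and the pick-and-place logic are assumptions for clarity, not NVIDIA's actual networks.

```python
# Hypothetical sketch of the three-stage pipeline described in the article.
# Each stage stands in for a trained neural network; here they are plain
# functions so the data flow between stages is easy to see.

def perceive(camera_frame):
    """Perception stage: infer which objects are in view and where.
    Stubbed as a pass-through over already-labeled detections."""
    return [{"object": name, "position": pos} for name, pos in camera_frame]

def generate_program(detections):
    """Program-generation stage: turn the perceived scene into a
    human-readable sequence of steps (assumed pick-and-place domain)."""
    program = []
    for src, dst in zip(detections, detections[1:]):
        program.append(f"place {src['object']} on {dst['object']}")
    return program

def execute(program):
    """Execution stage: dispatch each step to the robot controller.
    Stubbed: just record which actions would be taken."""
    return [f"executed: {step}" for step in program]

# A single demonstration: two colored cubes seen by the camera.
demo_frame = [("red cube", (0.1, 0.2)), ("blue cube", (0.4, 0.2))]
steps = generate_program(perceive(demo_frame))
log = execute(steps)
```

Keeping the generated program human-readable, as the paper's approach does, lets a person inspect what the robot plans to do before it acts.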
According to the research team's paper, for a robot to perform useful tasks in real-world settings, it must be easy to communicate the task to the robot: both the desired result and any hints about the best means to achieve it. A demonstration lets a user convey the task and provide clues about how best to perform it.
They shared their results at the International Conference on Robotics and Automation (ICRA), held this week in Brisbane, Australia. Watch the video demonstration here.