
AI Systems Can Learn Motion by Looking at YouTube Videos

Artificial intelligence is a field of research with tremendous potential. Although AI systems are written by human programmers, the technology is capable of learning on its own. A new effort by University of California, Berkeley researchers focuses on teaching AI systems to learn motion from YouTube clips. It is an intriguing, if somewhat unsettling, development.

Teaching Motion to AI Systems

There is a constant need to teach artificial intelligence systems new things. Whether it involves raw data or a feel for motion, the possibilities are virtually limitless. Automating most of the teaching and learning process appears to be the next logical step in the evolution of AI. That may prove easier said than done, although significant progress has been made recently.

Based on recent developments outlined by the University of California, Berkeley, automating AI learning can be approached in many different ways. Researchers are currently exploring ways to teach AI about motion using YouTube videos. The newly developed framework combines computer vision and reinforcement learning to learn skills from a video clip. While it is initially aimed at motion training, the concept could seemingly be extended to many other skills and domains as well.

So far, the researchers have successfully taught AI systems more than 20 acrobatic skills, including handsprings, backflips, and cartwheels. Because the approach does not require motion capture data, it is a notable development: it could change how human movement is converted into digital form, including the methods used in the film industry.

As one would expect, there is more to this framework than meets the eye. When a YouTube video is fed in, the framework first estimates the poses being performed in each frame. It then trains a simulated character to mimic that movement through reinforcement learning. Additionally, the framework can predict how a motion will continue before seeing it play out in the video.
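To make that two-stage idea concrete, the sketch below mocks up a pose-estimation stage and a reinforcement-learning reward that scores how closely a simulated character tracks the poses recovered from a video. The function names, the 30-joint pose vector, and the exponential reward are illustrative assumptions for this sketch, not the researchers' actual implementation.

```python
import numpy as np

# Hypothetical stand-ins for the two stages described above: a pose
# estimator that maps video frames to joint angles, and a reward that
# drives reinforcement learning. Names and signatures are illustrative.

def estimate_poses(frames):
    """Stage 1 (computer vision): estimate a joint-angle pose per frame."""
    # Placeholder: pretend each frame yields a vector of 30 joint angles.
    return [np.zeros(30) for _ in frames]

def imitation_reward(sim_pose, ref_pose, scale=2.0):
    """Stage 2 (reinforcement learning): reward the simulated character
    for matching the reference pose extracted from the video.
    An exponential of negative pose error is a common imitation-style
    reward; the exact terms in the published work differ."""
    err = np.sum((sim_pose - ref_pose) ** 2)
    return np.exp(-scale * err)

# Sketch of a single training episode: the character is rewarded
# frame-by-frame for tracking the poses recovered from the video.
frames = [None] * 100               # stand-in for decoded video frames
reference = estimate_poses(frames)  # vision stage
sim_pose = np.zeros(30)             # simulated character's current pose
total_reward = 0.0
for ref_pose in reference:
    # a real implementation would query a policy and step a physics engine here
    total_reward += imitation_reward(sim_pose, ref_pose)
print(f"episode imitation reward: {total_reward:.2f}")
```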

Authors Jason Peng and Angjoo Kanazawa add:

“All in all, our framework is really just taking the most obvious approach that anyone can think of when tackling the problem of video imitation. The key is in decomposing the problem into more manageable components, picking the right methods for those components, and integrating them together effectively. However, imitating skills from videos is still an extremely challenging problem, and there are plenty of video clips that we are not yet able to reproduce: Nimble dance steps, such as this Gangnam style clip, can still be difficult to imitate.”

The new system would be of limited use if the learned skills could not be applied elsewhere. The researchers are confident their implementation can transfer those skills to different characters and environments, and could even be used to train robots. Given how much robots have evolved over the past few years, this research may come to play an increasingly important role in the animation and robotics fields.

Image(s): Shutterstock.com
