How AI Helps An Avatar To Get Dressed

Putting on clothes is one of those tasks that is largely taken for granted, but for robots and virtual characters it's a procedure that has proved baffling for many years.  A team from the Georgia Institute of Technology believe they've managed to codify the various steps involved and can now simulate getting dressed in computer animation.

The work, documented in a recently published paper, uses a machine-learning-driven process that can realistically simulate the various steps involved in getting dressed.  The work is important because putting clothes on is more complex than we perhaps imagine, involving a number of physical interactions between us and our clothes that are guided by our sense of touch.

“Dressing seems easy to many of us because we practice it every single day. In reality, the dynamics of cloth make it very challenging to learn how to dress from scratch,” the researchers explain. “We leverage simulation to teach a neural network to accomplish these complex tasks by breaking the task down into smaller pieces with well-defined goals, allowing the character to try the task thousands of times and providing reward or penalty signals when the character tries beneficial or detrimental changes to its policy.”
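To make the quoted idea concrete, here is a minimal, purely illustrative sketch of reward-driven training broken into sub-tasks. The sub-task names, the toy one-dimensional "environment" and the simple hill-climbing update are assumptions made for this example; they are not the paper's actual method or code, only a stand-in for the trial-and-error, reward-and-penalty loop the researchers describe.

```python
import numpy as np

# Illustrative only: a multi-step task decomposed into sub-tasks, each
# trained by trial and error with its own reward signal. The sub-task
# names and toy environment are hypothetical, not from the paper.
SUBTASKS = ["grasp_sleeve", "insert_arm", "pull_over_shoulder"]

def rollout(policy_params, subtask, rng, steps=50):
    """Run one episode of a toy 1-D environment for a given sub-task.

    The character's 'state' is a scalar distance to the sub-task goal;
    the policy is a single gain that pushes the state toward zero.
    The reward penalizes remaining distance at every step.
    """
    state = rng.uniform(0.5, 1.5)                         # start away from the goal
    total_reward = 0.0
    gain = policy_params[subtask]
    for _ in range(steps):
        action = -gain * state + rng.normal(scale=0.01)   # noisy control
        state = state + action
        total_reward += -abs(state)                       # penalty for being off-goal
    return total_reward

def train(episodes=2000, lr=0.05, seed=0):
    """Hill-climbing stand-in for the reinforcement-learning updates:
    try a small change to the policy, keep it if reward improves
    ('beneficial'), discard it otherwise ('detrimental')."""
    rng = np.random.default_rng(seed)
    params = {s: 0.1 for s in SUBTASKS}
    for _ in range(episodes):
        for subtask in SUBTASKS:                          # one sub-task at a time
            baseline = rollout(params, subtask, rng)
            candidate = dict(params)
            candidate[subtask] += rng.normal(scale=lr)
            if rollout(candidate, subtask, rng) > baseline:
                params = candidate                        # keep the beneficial change
    return params

if __name__ == "__main__":
    print(train())
```

In the actual work this role is played by a neural-network policy trained over thousands of simulated attempts; the sketch above only mirrors the structure of that loop at toy scale.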

A gradual process

The researchers iteratively updated the neural network that underpins the project in order to teach the virtual avatar to dress itself successfully.

The avatar was able to successfully perform a range of dressing-related tasks, including putting on t-shirts and jackets.  The neural network allowed the avatar to perform fairly complex re-enactments of the various ways in which people put on clothes.  The key to this was the way the researchers replicated the tactile nature of the task: by carefully selecting which elements of touch to incorporate into their model, they were able to create a character capable of dressing itself under a range of conditions.
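As a rough illustration of what "selecting which elements of touch" to feed the controller might look like, the sketch below builds a single observation vector from body-pose and touch-like features. The specific feature names (per-finger contact forces, cloth-to-body distances) are assumptions inspired by the article, not the paper's actual state definition.

```python
import numpy as np

# Hypothetical example: folding tactile cues into the observation a
# control policy sees, alongside ordinary proprioceptive information.
def build_observation(joint_angles, hand_contact_forces, cloth_distances):
    """Concatenate proprioceptive and tactile features into one policy input.

    joint_angles        : the character's joint positions
    hand_contact_forces : per-finger contact force magnitudes (tactile cue)
    cloth_distances     : distances from key body points to the garment
    """
    tactile = np.clip(hand_contact_forces, 0.0, 10.0)   # bound noisy force readings
    return np.concatenate([joint_angles, tactile, cloth_distances])

# Toy usage: a character with 3 joints, 5 fingers and 2 tracked cloth points.
obs = build_observation(
    joint_angles=np.array([0.1, -0.4, 0.8]),
    hand_contact_forces=np.array([0.0, 0.2, 1.3, 0.0, 0.0]),
    cloth_distances=np.array([0.05, 0.12]),
)
print(obs.shape)  # (10,)
```

Which of these touch-like signals to include, and at what fidelity, is exactly the kind of design choice the researchers describe as critical to getting the character to dress reliably.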

“We’ve opened the door to a new way of animating multi-step interaction tasks in complex environments using reinforcement learning,” the team explain. “There is still plenty of work to be done continuing down this path, allowing simulation to provide experience and practice for task training in a virtual world.”

The team, who also worked with researchers from Google Brain, are now exploring how their work could be used to help robots better understand the dynamics of dressing and potentially provide assistance to elderly people.  You can see the avatar in action in the video below.
