Using VR To Control Robots

As man and machine increasingly work alongside one another, there has been a greater emphasis placed on their safe collaboration. A team from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) believes that the best way might be to take humans out of harm's way entirely.

In a recently published paper, they propose using virtual reality to operate robots remotely. The worker sits inside a VR control room whose array of sensor displays transports them inside the robot's head, from which they can control its actions.

“A system like this could eventually help humans supervise robots from a distance,” the researchers say. “By teleoperating robots from home, blue-collar workers would be able to tele-commute and benefit from the IT revolution just as white-collar workers do now.”

Virtual control

The team put their method through its paces using a Baxter humanoid robot, but they are confident it can work just as well with other robot platforms on the market (similarly, you don't need an Oculus headset to use it).

There have historically been two approaches to using VR for teleoperating robots. The first is a direct approach, whereby the user's vision is linked directly to the robot's state. Whilst these systems can be effective, any delay in the signal can rapidly lead to nausea or headaches in the operator.

The second main approach is the cyber-physical one. In this, the user is separated from the robot and interacts with a virtual copy of it. It's an approach that requires a great deal of data and dedicated physical space to operate.

The MIT team's system sits somewhere between the two. It overcomes the delay issue by providing the user with constant visual feedback, whilst the sense of being separate from the robot is overcome by making the operator feel as though they are inside the robot's head.
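To make that idea concrete, here is a minimal sketch (in Python, using hypothetical receive_robot_frame and render_control_room stand-ins rather than the team's actual code) of how a render loop decoupled from the robot's telemetry can keep the operator's view smooth even when the link is laggy:

import threading, time
import numpy as np

def receive_robot_frame():
    """Stand-in for a blocking network call returning the robot's latest
    camera image; here it just simulates a laggy 640x480 feed."""
    time.sleep(0.25)                       # pretend the link adds 250 ms of delay
    return np.zeros((480, 640, 3), dtype=np.uint8)

def render_control_room(camera_feed):
    """Stand-in for drawing the virtual control room and its sensor displays."""
    pass

latest_frame = None                        # most recent image from the robot
frame_lock = threading.Lock()

def telemetry_listener():
    """Background thread: pull frames whenever they arrive, however late."""
    global latest_frame
    while True:
        frame = receive_robot_frame()
        with frame_lock:
            latest_frame = frame

threading.Thread(target=telemetry_listener, daemon=True).start()

REFRESH_HZ = 90                            # typical VR headset refresh rate
while True:
    with frame_lock:
        frame = latest_frame
    # The control room is redrawn at full rate, so the operator's view never
    # stalls even when the robot's camera feed is stale or delayed.
    render_control_room(camera_feed=frame)
    time.sleep(1.0 / REFRESH_HZ)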

It uses the homunculus model of mind, which is the notion that there's a tiny human inside our brains controlling what we do. It's a pretty wacky concept for humans, but it makes more sense for robots: the human in the control room can see through the robot's eyes and control its actions accordingly.

The operator is able to interact with any controls that appear in the virtual space, and therefore fully operate the robotic hand grippers to move items about in the physical domain.

The process for doing this is relatively straightforward: the operator's space is mapped into the virtual domain, which is in turn mapped into the robot's physical space. This provides a sense of co-location.
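As a rough illustration of that chain of mappings, the sketch below composes two assumed, fixed calibration transforms to carry a tracked hand-controller pose from the operator's room, through the virtual space, into the robot's frame; the frame names and numbers are illustrative only, not the published system's calibration:

import numpy as np

# Fixed calibration: operator (VR tracking) frame -> virtual control room frame,
# and virtual control room frame -> robot base frame (both assumed for the example).
T_virtual_from_operator = np.eye(4)
T_robot_from_virtual = np.array([
    [1, 0, 0, 0.60],   # place the workspace 60 cm in front of the robot base
    [0, 1, 0, 0.00],
    [0, 0, 1, 0.20],
    [0, 0, 0, 1.00],
])

def hand_pose_to_gripper_target(T_operator_hand):
    """Map a tracked hand-controller pose (4x4 homogeneous matrix in the
    operator's room) into a target pose for the gripper in the robot's frame."""
    return T_robot_from_virtual @ T_virtual_from_operator @ T_operator_hand

# Example: controller held 30 cm in front of the operator at chest height.
T_hand = np.eye(4)
T_hand[:3, 3] = [0.30, 0.0, 1.2]
target = hand_pose_to_gripper_target(T_hand)
print(target[:3, 3])   # Cartesian goal the robot's arm controller would track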

Flexibility

The team believes that their system affords significantly more flexibility than existing systems, many of which require substantially more resources to do the same work. For instance, many existing systems use 2D information from an array of cameras to build a 3D model of the environment, which is then processed and displayed back to the operator.

The MIT approach skips that process entirely: the 2D image from each of the robot's cameras is shown to the corresponding eye, and the operator's brain infers the 3D information automatically.
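A toy sketch of that stereo pass-through idea, with hypothetical camera and headset stand-ins rather than any real VR API, might look like this:

import numpy as np

def get_camera_image(side):
    """Stand-in for grabbing one of the robot's two head cameras (hypothetical)."""
    return np.zeros((480, 640, 3), dtype=np.uint8)

def submit_eye_texture(eye, image):
    """Stand-in for handing an image to the headset compositor for one eye."""
    pass

def render_stereo_frame():
    # No explicit 3D reconstruction: each raw 2D image goes straight to the
    # matching eye, and the operator's own visual system fuses them into depth.
    submit_eye_texture("left", get_camera_image("left"))
    submit_eye_texture("right", get_camera_image("right"))

render_stereo_frame()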

The system was put through its paces in a series of tests that saw the robot picking up simple items such as screws, and stapling wires.

When compared with current state-of-the-art systems, the MIT system grasped objects successfully 95% of the time and completed tasks roughly 57% faster. Interestingly, operators tended to perform much better when they came from a gaming background. The research also highlighted how the robots could be piloted from hundreds of miles away.

It's a major leap in what can be achieved with robots, and the team hopes to make the system significantly more scalable, so that multiple users with different types of robots can work effectively together, and especially alongside existing automated technology.
