Normally when I talk about collaboration in the workplace, it is very much with humans in mind. With technologies such as drones and driverless cars getting closer to market, however, there is an increasing need for robots to get better at adapting to complex situations.
To a large extent, that means working more effectively together. This is something the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) have been working on for a few years, and they recently presented a paper setting out their latest thinking on the topic.
Robot collaboration
Central to the paper is a system whereby three robots can work successfully together to ensure items are delivered accurately in an unpredictable environment.
To put their robots through their paces, the team created a miniature bar-type environment, complete with one ‘bartender’ robot and two ‘waiter’ robots.
The waiter robots were required to take orders from customers and to be aware of whether their ‘colleague’ had already delivered a given order, whilst also navigating the bar efficiently.
The robots were able to communicate with one another using a planning algorithm that allowed them to plot the best route and collaborate effectively, despite being given minimal information at the outset.
Dealing with uncertainty
The biggest challenge for the project was ensuring that the robots dealt successfully with uncertainty, which manifested itself in three main ways: in their sensors, their communications, and the outcomes of their actions.
“Each robot’s sensors get less-than-perfect information about the location and status of both themselves and the things around them,” the team say. “As for outcomes, a robot may drop items when trying to pick them up or take longer than expected to navigate. And, on top of that, robots often are not able to communicate with one another, either because of communication noise or because they are out of range.”
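To make those three sources of uncertainty a little more concrete, here is a minimal sketch of how each might be modelled in a simulation. To be clear, this is not the team's code: the failure rates and function names are invented purely for illustration.

```python
import random

# Illustrative failure rates, invented for this sketch rather than taken from the paper.
SENSOR_NOISE = 0.10   # chance a location reading is wrong
DROP_CHANCE = 0.05    # chance a pick-up attempt fails
COMM_FAILURE = 0.20   # chance a message is lost even when in range

def sense_location(true_location, all_locations):
    """Sensor uncertainty: occasionally report the wrong location."""
    if random.random() < SENSOR_NOISE:
        return random.choice([loc for loc in all_locations if loc != true_location])
    return true_location

def attempt_pickup():
    """Outcome uncertainty: the same action can succeed or fail."""
    return random.random() >= DROP_CHANCE   # True means the item is now held

def send_message(message, in_range):
    """Communication uncertainty: range limits or channel noise lose the message."""
    if not in_range or random.random() < COMM_FAILURE:
        return None                         # the message never arrives
    return message
```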
For instance, there were constraints on communication between the robots: the bartender robot could only talk to one waiter robot at a time, and the robots could only communicate with one another over a limited range. Both of these limitations would present distinct challenges in the field.
“These limitations mean that the robots don’t know what the other robots are doing or what the other orders are,” the team say. “It forced us to work on more complex planning algorithms that allow the robots to engage in higher-level reasoning about their location, status, and behavior.”
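The article doesn't spell out the algorithms themselves, but the flavour of that higher-level reasoning can be sketched: each robot maintains a belief, a probability distribution over what its teammate might be doing, and updates it only when a message actually gets through. The state names and the 0.5 threshold below are, again, invented for illustration.

```python
# A minimal belief-tracking sketch, assuming each waiter robot keeps a
# probability distribution over what its teammate is doing.
TEAMMATE_STATES = ["taking_order", "fetching_drink", "delivering"]

def uniform_belief():
    """With no messages received, any teammate activity is equally likely."""
    return {s: 1.0 / len(TEAMMATE_STATES) for s in TEAMMATE_STATES}

def update_belief(belief, status_message):
    """Collapse the belief when a status message gets through; otherwise keep it."""
    if status_message is None:              # lost, or teammate out of range
        return belief
    return {s: 1.0 if s == status_message else 0.0 for s in TEAMMATE_STATES}

def should_deliver(belief, order_pending):
    """Head for the customer only if the teammate probably isn't already doing so."""
    return order_pending and belief["delivering"] < 0.5
```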
Key to overcoming some of these challenges was designing the robots to view tasks much as we do. We don’t tend to think about each and every step we take, as most of them are second nature to us.
So the team developed the robots to perform a series of ‘macro-actions’, each of which bundles together multiple smaller steps, as sketched below.
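As a rough sketch of that idea (the class design, step names, and robot API below are my own illustration, not the team's implementation), a macro-action can be represented as a named bundle of primitive steps plus a condition that tells the robot when it has finished:

```python
class MacroAction:
    """A high-level action that bundles many primitive steps.

    Illustrative only: the robot API used below (navigate_to, wait_for,
    pick_up, holding) is hypothetical.
    """
    def __init__(self, name, steps, done):
        self.name = name       # e.g. "get_beverage"
        self.steps = steps     # primitive actions, executed in order
        self.done = done       # predicate: has the macro-action finished?

    def run(self, robot):
        for step in self.steps:
            step(robot)
            if self.done(robot):
                break

# "Fetch a drink from the bar" expressed as a single macro-action, so the
# planner never has to reason about the individual motions.
get_beverage = MacroAction(
    name="get_beverage",
    steps=[
        lambda r: r.navigate_to("bar"),
        lambda r: r.wait_for("bartender"),
        lambda r: r.pick_up("drink"),
    ],
    done=lambda r: r.holding("drink"),
)
```

The planner then reasons over a handful of macro-actions rather than every primitive motion.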
For instance, if a waiter robot was moving from the room to the bar, it had to be prepared to face a range of possible situations, whether that’s the bartender serving another robot or not yet being ready to serve it at all.
“You’d like to be able to just tell one robot to go to the first room and one to get the beverage without having to walk them through every move in the process,” the team say. “This method folds in that level of flexibility.”
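To illustrate that flexibility, the branching from the bar example above might live inside a single macro-action, so the high-level command never changes; only the steps it expands into do. The bar states and step names here are hypothetical:

```python
# A hypothetical branch inside the "get_beverage" macro-action: one
# high-level command copes with whatever it finds at the bar.
def next_step_at_bar(bar_state):
    if bar_state == "serving_another_robot":
        return "queue_and_wait"
    if bar_state == "not_ready":
        return "wait_for_bartender"
    return "collect_drink"

for state in ["serving_another_robot", "not_ready", "ready"]:
    print(state, "->", next_step_at_bar(state))
```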
The next task is to apply these methods to larger, more complex domains to test them further. The team are planning a simulated search-and-rescue challenge, as well as a damage-assessment task for the International Space Station.
As a result, smarter and more collaborative robots may be hitting the market pretty soon. Check out the video below for more information on the project.
That is impressive, and I can certainly see how this would become invaluable for things like driverless cars.
Interesting piece here about robots being 'taught', but this time at Berkeley:
http://www.bloomberg.com/features/2015-preschool-…
Fascinating stuff, great share Paul.
A sign of this in action?
https://www.youtube.com/watch?v=tGY0TxyGp0g