They were working to develop a means of controlling large numbers of robots by taking inspiration from the way swarms behave in the animal kingdom.
The researchers have built an artificial pheromone system that they believe is both reliable and accurate. Their system uses an LCD screen and USB camera to simulate several pheromones in the form of visual trails on the screen.
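The idea of simulated pheromone trails can be sketched in code. The grid, decay rate, and method names below are illustrative assumptions, not the researchers' actual system: pheromone is deposited into a 2D field and evaporates each step, so trails fade unless robots keep reinforcing them, which is the behaviour the screen-based system renders visually.

```python
# Illustrative sketch of a virtual pheromone field (not the actual
# screen-and-camera system described above). Robots deposit pheromone
# at their position; the whole field decays each step, so unreinforced
# trails fade away.

class PheromoneField:
    def __init__(self, width, height, decay=0.9):
        self.width, self.height = width, height
        self.decay = decay  # fraction of pheromone retained per step
        self.grid = [[0.0] * width for _ in range(height)]

    def deposit(self, x, y, amount=1.0):
        """A robot at (x, y) lays down pheromone."""
        self.grid[y][x] += amount

    def step(self):
        """Evaporation: every cell loses (1 - decay) of its pheromone."""
        for row in self.grid:
            for x in range(self.width):
                row[x] *= self.decay

    def strongest_neighbour(self, x, y):
        """Direction of the strongest adjacent trail, for a follower."""
        candidates = [(nx, ny) for nx, ny in
                      [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
                      if 0 <= nx < self.width and 0 <= ny < self.height]
        return max(candidates, key=lambda p: self.grid[p[1]][p[0]])
```

A follower robot would simply move toward `strongest_neighbour` of its current cell, which is enough to reproduce basic trail-following.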
A separate team, this time from the University of Sheffield, have developed a novel way of programming and controlling a swarm of 600 robots on the fly.
The method, which was documented in a recent paper, could be especially valuable in areas such as driverless technology, where safety is a key concern.
The method was borrowed from existing applications in manufacturing, and represents a genuine advance in understanding how large numbers of machines can work together effectively.
The work is an improvement on previous methods that largely use trial and error to automatically code the robots. This is not only time-consuming to maintain, but can also result in undesirable behaviour among the swarm.
By using supervisory control theory, the approach reduces the need for human input, and with it the scope for human error. Tasks were assigned to the robots via a graphical interface, with the machine then translating this for the robots automatically.
The application uses a novel form of linguistics whereby the robots have their own alphabet that’s used to construct words that direct their actions.
The supervisory control theory dictates that the robots only function when valid words are created, which is an approach the researchers believe guarantees the robots conform to expectations.
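The valid-word mechanism can be made concrete with a small sketch. The alphabet, states, and transitions below are hypothetical examples, not the paper's actual specification: each robot action is a letter, the supervisor is a finite-state machine, and any sequence of actions the machine rejects is simply never executed.

```python
# Minimal sketch of the supervisory-control idea: actions are letters,
# and only "valid words" (sequences the supervisor's state machine
# accepts) are ever executed. The alphabet here is hypothetical:
# g = grasp an object, m = move, r = release.
TRANSITIONS = {
    ("empty", "m"): "empty",    # may roam while holding nothing
    ("empty", "g"): "holding",  # grasping picks up an object
    ("holding", "m"): "holding",
    ("holding", "r"): "empty",  # release is only legal while holding
}

def is_valid_word(word, start="empty"):
    """Return True iff the supervisor accepts the action sequence."""
    state = start
    for letter in word:
        key = (state, letter)
        if key not in TRANSITIONS:
            return False  # illegal action in this state: reject
        state = TRANSITIONS[key]
    return True

def execute(word, robot_act):
    """Run the actions only if the whole word is valid (supervision)."""
    if not is_valid_word(word):
        return False
    for letter in word:
        robot_act(letter)
    return True
```

Because an invalid word such as `"rg"` (releasing before anything is grasped) never reaches the execution stage, unsafe behaviour is ruled out by construction rather than caught at runtime, which is the guarantee the researchers are after.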
It’s increasingly likely that machines will need to program themselves, so being able to do so in predictable ways is an important milestone for us to pass.
The team successfully tested their approach on swarms of up to 600 robots, with the swarm able to co-ordinate itself automatically to achieve its goal.
This would typically require them to gather together, manipulate objects and co-ordinate their actions in logical ways.
“Our research poses an interesting question about how to engineer technologies we can trust – are machines more reliable programmers than humans after all? We, as humans set the boundaries of what the robots can do so we can control their behaviour, but the programming can be done by the machine, which reduces human error,” the researchers say.
Of course, the question remains whether the swarms eradicate the kind of errors we are aware of ourselves, or the unintended consequences that have such potentially dire risks for us.
The team plan to explore the issue further, however. Their next step is to develop ways for humans and robot swarms to communicate and collaborate, learning from each other, which may reduce the risk of robots taking a logical, yet altogether wrong, path to achieve their goal.