The last few years have seen a number of novel ways for machines to learn new things. Traditionally, this involves feeding an algorithm large volumes of data from which it learns to distinguish correct answers from incorrect ones.
The folks at DeepMind, however, have been more inclined to use play as a learning mechanism, as it encourages greater adaptability. The company famously trained its systems to learn, and eventually master, a collection of retro video games. Now, scientists from the company have published a paper describing how they used the same approach to help a machine learn about the physical world.
Learning through experimentation
The paper describes work undertaken by the British company to allow an AI to learn the physical properties of various objects by interacting with them in the same way a child does with a toy.
For instance, one experiment saw the AI playing with a set of blocks of differing masses. Whenever the AI correctly identified the heaviest block, it received a reward, and it was given negative feedback whenever it chose the wrong one. After a few iterations of this game, the algorithm learned that the best way to score highly was to play with each block before deciding which was the heaviest.
A second experiment placed the blocks on top of each other in a tower. Some blocks were glued together, while others were left loose. The algorithm was tasked with identifying how many distinct blocks there were, with the same kind of feedback as before for correct and incorrect answers. As in the first experiment, the AI quickly learned that the best strategy was to play with the blocks to better understand their properties.
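To make the reward structure concrete, here is a minimal toy sketch of how such a task might be set up, assuming a simple poke-then-guess interaction; the class name, reward values, and noise model are illustrative assumptions rather than details from DeepMind's paper.

```python
import random

# Toy version of the "find the heaviest block" task described above. The class
# name, reward values, and noise level are illustrative assumptions, not the
# setup used in DeepMind's paper.
class HeaviestBlockTask:
    def __init__(self, n_blocks=4):
        self.masses = [random.uniform(0.5, 5.0) for _ in range(n_blocks)]

    def poke(self, block):
        # Interacting with a block returns a noisy reading of its mass,
        # standing in for the physical feedback an embodied agent would get.
        return self.masses[block] + random.gauss(0, 0.1)

    def guess_heaviest(self, block):
        # Positive reward for the correct answer, negative feedback otherwise.
        heaviest = max(range(len(self.masses)), key=lambda i: self.masses[i])
        return 1.0 if block == heaviest else -1.0

# A naive "interact before answering" policy: poke every block once,
# then guess the one that felt heaviest.
task = HeaviestBlockTask()
readings = [task.poke(i) for i in range(4)]
print(task.guess_heaviest(max(range(4), key=lambda i: readings[i])))
```

Under this kind of reward scheme, a policy that interrogates the blocks before answering will, on average, outscore one that guesses blindly, which is essentially the behavior the DeepMind agents discovered for themselves.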
It should be said that the experiment was conducted in a purely virtual environment, and identifying and then manipulating real physical objects is considerably harder. A number of teams are working on exactly that, however, especially when developing solutions for warehouses and other environments where object manipulation is central to what the AI does.
Understanding the world
For instance, Visual Genome is a project that aims to provide a hub for research into how well machines understand the world they operate in.
The platform was developed by researchers at the Stanford Artificial Intelligence Lab and aims to tackle some of the toughest questions in computer vision, with the eventual goal of developing machines that can understand what they see.
Researchers at Boston University are working on the same topic and have developed a robot capable of recognizing specific objects and then maneuvering around them without human assistance.
The ability of robots to guide and navigate themselves is hugely important and feeds into a vast range of possible applications. The Boston project used a deep neural network capable of processing huge amounts of data in order to recognize simple objects.
“There’s an algorithm that will take a ton of pictures of one object and will put it in and compile it all,” they say. “Then we basically assign a number to it.” The robot “will come upon an object and it will say, ‘Oh, there’s an object in front of me, let me think about it.’ It will…find a picture that corresponds with the object, pick that number, and then it will be able to use that as a reference, so it can exclaim, ‘Oh, it’s a ball,’ ‘It’s a cone,’ or whatever object I had decided to teach it.”
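As a rough illustration of the pipeline that quote describes, here is a minimal sketch, assuming a generic feature extractor and a nearest-match lookup; the embed() function, the averaging step, and the numeric labels are illustrative assumptions, not the Boston University team's actual implementation.

```python
import numpy as np

# Rough sketch of the pipeline described in the quote: many pictures of each
# object are "compiled" into a single reference representation, each object is
# assigned a number, and a new image is matched to the closest reference.
# embed() is a stand-in assumption for whatever feature extractor (e.g. a deep
# neural network) produces the comparison vectors.
def embed(image):
    return image.reshape(-1).astype(np.float32)

def build_references(images_per_object):
    # Average the embeddings of all training images of one object ("compile it all").
    return {label: np.mean([embed(img) for img in imgs], axis=0)
            for label, imgs in images_per_object.items()}

def classify(image, references):
    # "Find a picture that corresponds with the object, pick that number."
    emb = embed(image)
    return min(references, key=lambda label: np.linalg.norm(references[label] - emb))

# Toy usage with random 8x8 "images": number 0 might mean "ball", 1 "cone".
rng = np.random.default_rng(0)
refs = build_references({0: rng.random((10, 8, 8)), 1: rng.random((10, 8, 8)) + 1.0})
print(classify(rng.random((8, 8)) + 1.0, refs))  # expected: 1
```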
The DeepMind work takes this a step further and should prove an important part of the process by which machines learn to better understand their surroundings, with significant implications for the effectiveness of a whole range of industrial robotics.