A good deal of time and effort has been expended in the past year to develop better learning capabilities in robots and other automated systems. Whilst these attempts share a common goal, their methods have varied considerably.
A team from Georgia Tech, for instance, used a traditional expert-systems style approach to train IBM’s Watson to get better at answering specific queries.
Researchers at the University of Maryland, meanwhile, trained their robots by showing them YouTube videos of humans performing certain tasks, which the machines were subsequently able to replicate.
Inspired by humans
A team from New York University are taking their own inspiration from the way human beings learn. The team believe this allows their AI to acquire new knowledge more efficiently.
What makes the approach interesting is that it can learn a new concept from a single exposure, rather than requiring multiple examples before the concept sinks in.
Learning from so little data has clear benefits from a data-processing perspective. The researchers suggest that the field’s current reliance on large training sets stems largely from the difficulty algorithms have in thinking as we do, despite attempts to build neural networks that mimic the brain.
Taking a Bayesian approach
The researchers instead use a Bayesian program learning framework, in which the software generates a unique program for each character by drawing it with an imaginary pen. A probabilistic approach is then used to match a candidate program to a given character.
The approach mimics the way adults often write characters down, rather than the more obvious option of mimicking how children learn to do so.
“The key thing about probabilistic programming—and rather different from the way most of the deep-learning stuff is working—is that it starts with a program that describes the causal processes in the world,” the researchers say. “What we’re trying to learn is not a signature of features, or a pattern of features. We’re trying to learn a program that generates those characters.”
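To make the idea concrete, here is a minimal sketch of the generate-and-score loop described above. Everything here is an illustrative assumption rather than the authors’ actual model: a “program” is reduced to a list of pen strokes, each stroke a list of `(x, y)` grid points, and the probabilistic matching is a simple pixel-flip noise model.

```python
import math

GRID = 16  # hypothetical image resolution for this toy example

def render(program):
    """Rasterise a stroke program onto a GRID x GRID binary image."""
    image = [[0] * GRID for _ in range(GRID)]
    for stroke in program:
        for x, y in stroke:
            image[y][x] = 1
    return image

def log_likelihood(image, program, noise=0.05):
    """Score how well a stroke program explains an observed image.

    Toy noise model: each rendered pixel matches the observation
    with probability (1 - noise), independently.
    """
    rendered = render(program)
    ll = 0.0
    for y in range(GRID):
        for x in range(GRID):
            match = rendered[y][x] == image[y][x]
            ll += math.log(1 - noise) if match else math.log(noise)
    return ll

def classify(image, programs_by_class):
    """One-shot classification: pick the class whose single stored
    stroke program best explains the new image."""
    return max(programs_by_class,
               key=lambda c: log_likelihood(image, programs_by_class[c]))

# Usage: two invented "characters", each defined by one example program.
vertical = [[(8, y) for y in range(2, 14)]]    # a vertical bar
horizontal = [[(x, 8) for x in range(2, 14)]]  # a horizontal bar
programs = {"l": vertical, "-": horizontal}

observed = render(vertical)  # a fresh image of the vertical character
print(classify(observed, programs))  # -> l
```

The design choice this sketch tries to capture is the one the quote emphasises: the stored representation is a causal, generative program (strokes of a pen) rather than a bag of learned features, and recognition is inference over which program most plausibly produced the image.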
The method was tested by pitting the program against human judges, who were asked to tell the difference between characters the software produced and ones written by a person.
The results revealed that just 25% of the human judges were able to do so. The next stage is to scale the process up and test it in a wider range of situations.
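This kind of visual Turing test boils down to measuring how often judges can pick out the machine-made drawing; chance performance is 50%. A toy sketch, with judge data invented purely for illustration:

```python
def discrimination_rate(judgements):
    """Fraction of trials where a judge correctly identified which
    drawing was machine-made. 0.5 corresponds to guessing at chance."""
    correct = sum(1 for guess, truth in judgements if guess == truth)
    return correct / len(judgements)

# Each pair is (judge's guess, actual source) for one trial.
trials = [("machine", "machine"), ("human", "machine"),
          ("machine", "human"), ("human", "human")]
print(discrimination_rate(trials))  # -> 0.5, i.e. chance level
```

A judge whose rate stays near 0.5 cannot reliably distinguish the software’s output from human handwriting.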
It’s certainly an impressive paper, and could have a big impact on machine learning in the coming years. The question hanging over most attempts to mimic human thinking is whether that is really the most effective way to learn, or whether machines could come up with a better way than we have thus far managed.
Check out the video below to hear the authors talking about their work in more detail, and let me know your thoughts in the comments below.