The past year has seen a significant leap in approaches to machine learning. For instance, the Cornell-led Tell Me Dave project uses human volunteers to ‘train’ the machine through a game interface.
A team from Georgia Tech, by contrast, used a traditional expert-systems-style approach to train IBM’s Watson to answer specific queries more accurately.
Researchers at the University of Maryland, meanwhile, trained their robots by showing them YouTube videos of humans performing certain tasks, which the machines were subsequently able to reproduce.
A team from New York University, for its part, is exploring how humans learn, and whether that approach can be applied when building learning robots.
Raising expectations
Whilst these developments are undoubtedly impressive, there is a sense that they represent some of the low-hanging fruit in the pursuit of artificial intelligence.
After all, many such tasks simply require large databases and detailed rules for making sense of the data. Understanding and interpreting something like a story is far more complex.
A recent project led by researchers at the Karlsruhe Institute of Technology in Germany aims to construct a database that is capable of making sense of stories.
It demonstrates this by successfully answering questions posed to it about the story.
An understanding of narrative
To achieve this, the team is trawling Wikipedia to create plot synopses of around 300 movies. These synopses are then aligned with the movies themselves, frame by frame.
Of course, this provides a degree of descriptive detail, but it does less well at capturing the context of why something happened.
The team attempts to fill this gap by trawling sources such as the described-video text supplied to blind viewers, which aims to give those ‘viewers’ exactly this kind of context.
Last, but not least, a team of human annotators read through the synopses and formulated questions about the paragraphs they read, together with an answer to each question and the piece of text that supports that answer.
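To make the structure of such a dataset concrete, here is a rough sketch of what a single annotated entry might look like. This is purely illustrative: the field names and values are my own assumptions, not the project’s actual schema.

```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class QAEntry:
    """One hypothetical entry in a movie-story QA dataset.

    Field names are illustrative guesses, not the project's real schema.
    """
    movie_title: str                       # which of the ~300 films this belongs to
    question: str                          # question an annotator wrote about the plot
    answer: str                            # the annotator's answer
    supporting_text: str                   # synopsis snippet that justifies the answer
    frame_range: Tuple[int, int] = (0, 0)  # frames of the film aligned to that snippet
    dvs_text: List[str] = field(default_factory=list)  # described-video sentences for context


example = QAEntry(
    movie_title="Some Film",
    question="Why does the hero leave the city?",
    answer="Because he is being hunted by the police.",
    supporting_text="Fearing arrest, he flees the city at dawn.",
    frame_range=(105_000, 112_000),
)
```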
This database of 7,000 questions was then used to train the algorithms. Suffice it to say, the questions are all ones a human would find fairly easy to answer (provided they’ve seen the film!), but for a machine they remain a challenge.
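To illustrate why such questions are hard for simple algorithms, here is a toy word-overlap baseline. It is entirely my own sketch, not the team’s method: it answers a question by picking the synopsis sentence that shares the most words with it, and so rewards surface vocabulary rather than an understanding of why events happened.

```python
import re


def tokenize(text: str) -> set:
    """Lower-case word set; crude, but enough for a toy baseline."""
    return set(re.findall(r"[a-z']+", text.lower()))


def best_supporting_sentence(question: str, synopsis_sentences: list) -> str:
    """Pick the synopsis sentence sharing the most words with the question."""
    q_words = tokenize(question)
    return max(synopsis_sentences, key=lambda s: len(q_words & tokenize(s)))


synopsis = [
    "The hero works as a clerk in the city.",
    "Fearing arrest, he flees the city at dawn.",
    "He hides in a village and starts a new life.",
]

# Picks the first sentence (most shared words), even though only the second
# one actually explains why the hero leaves -- exactly the kind of gap in
# understanding that projects like this one are trying to close.
print(best_supporting_sentence("Why does the hero leave the city?", synopsis))
```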
The first iterations have been a pretty resounding failure, but the team hopes that, over time, the machine will become better at reading the stories, and their subtexts.
It will also provide the industry with a good insight into how large the training database needs to be for a relatively challenging task such as this.
The database they’re working on will be available for free online, so check it out and see what you think.