Using the crowd to make robotic conversations more natural

Despite AI-based personal assistants getting progressively better in recent years, there is still much work to be done before machines are capable of having reasonable conversations with us.

So a recent study from a team at Disney Research is interesting, as it uses crowdsourcing to help robots improve their conversations.

Central to the project was something known as a persistent interactive personality (PIP).  This is a construct used to translate high-level goals into simple narratives that summarize a situation.

These summaries were then presented to workers recruited via the crowd, who were asked to produce appropriate speech for the situation in a single line of dialogue (or a non-verbal response if more appropriate).  These lines were then evaluated by a second pool of crowd workers to arrive at an optimal pool of responses.
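In rough terms, this amounts to a two-stage crowdsourcing pipeline: one pool of workers writes candidate lines for a summarized situation, and a second pool rates them.  The sketch below is purely illustrative, not the authors' implementation; the ask_worker and rate callables are hypothetical stand-ins for whatever crowdsourcing interface is actually used.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class CandidateLine:
    text: str                                           # a single line of dialogue proposed by a worker
    scores: List[float] = field(default_factory=list)   # ratings from the second pool of workers

def collect_candidates(summary: str, ask_worker: Callable[[str], str],
                       n_workers: int) -> List[CandidateLine]:
    """Stage 1: show the situation summary to several workers, gathering one line each."""
    return [CandidateLine(ask_worker(summary)) for _ in range(n_workers)]

def rate_candidates(candidates: List[CandidateLine], rate: Callable[[str], float],
                    n_raters: int) -> None:
    """Stage 2: have a second pool of workers score each candidate line."""
    for cand in candidates:
        cand.scores.extend(rate(cand.text) for _ in range(n_raters))

def best_responses(candidates: List[CandidateLine], keep: int = 3) -> List[str]:
    """Keep the highest-rated lines as the response pool for this situation."""
    ranked = sorted(candidates,
                    key=lambda c: sum(c.scores) / max(len(c.scores), 1),
                    reverse=True)
    return [c.text for c in ranked[:keep]]
```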

A richer narrative

The team believes that this kind of approach can rapidly broaden the range of expressions that machines can meaningfully deploy, whilst also providing a scalable way of expanding and updating their dialogue.

The recruited volunteers were asked to judge whether a response made sense, and then to score it for overall quality.  They were also asked to highlight whether particular words needed emphasizing or whether expressions should be tinged with a particular emotion.
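For concreteness, each worker judgment might be captured as a small record like the one below.  The field names are hypothetical, chosen only to illustrate the kinds of annotations described above, and are not taken from the study itself.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class LineAnnotation:
    """One worker's judgment of a candidate line (field names are illustrative)."""
    makes_sense: bool                                           # does the line fit the situation?
    quality: int                                                # overall quality, e.g. on a 1-5 scale
    emphasized_words: List[str] = field(default_factory=list)   # words to stress when spoken
    emotion: Optional[str] = None                               # e.g. "excited" or "apologetic"
```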

This approach is known as semi-situated learning.  The name refers to the fact that, whilst responses may be context-specific, the machine may have far more information about the situation than it chooses to express in the dialogue.

The experiment saw the dialogue generated by the crowd fed to a robot quizmaster, which was deployed both in an office and at a couple of public events.  The same pool of dialogue was used in each environment, so the office workers were more likely to hear repeated lines, as they were instructed to play the game more often.  Despite this, the robot quizmaster was smart enough to vary its dialogue sufficiently that none of the office players heard the same dialogue twice, despite playing the quiz over 30 times.
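The article doesn't spell out the selection logic, but avoiding repeats for a given player could be as simple as tracking which lines each player has already heard and sampling from the remainder, roughly as in this hypothetical sketch:

```python
import random
from collections import defaultdict
from typing import Dict, List, Set

# Lines each player has already heard, keyed by player id.
heard: Dict[str, Set[str]] = defaultdict(set)

def pick_line(player_id: str, pool: List[str]) -> str:
    """Pick a line the player hasn't heard yet, falling back to the full pool if exhausted."""
    unheard = [line for line in pool if line not in heard[player_id]]
    choice = random.choice(unheard or pool)
    heard[player_id].add(choice)
    return choice
```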

“We didn’t expect people to like the quiz game quite so much,” the authors say. “PIP can notice if language with a particular user is getting repetitive; if we had let PIP use this feature, it would have updated its own dialogue model in response.”

The next stage is to expand the system so that PIP can work in a much broader range of areas than simply quiz games.
