Using stories to teach robots right from wrong

There has been a sense that, as the capabilities of artificial intelligence have expanded at a rapid pace in the past few years, we need to step back and think about the philosophical and ethical side of AI.

This is especially so when we have such a patchy understanding of how seemingly straightforward goals might be carried out by an AI.  For instance, requesting that an AI eradicate cancer could prompt it to kill all humans, thus achieving its ultimate goal but probably not in the way we’d desire.

Researchers from the Georgia Institute of Technology believe that robots can learn a sufficient sense of ethics, even if it isn't hardwired into them, by using an approach they're calling Quixote.

The approach, which was documented in a recent paper, uses value alignment, with the robots trained on stories to understand right from wrong.

“The collected stories of different cultures teach children how to behave in socially acceptable ways with examples of proper and improper behavior in fables, novels and other literature,” the authors say. “We believe story comprehension in robots can eliminate psychotic-appearing behavior and reinforce choices that won’t harm humans and still achieve the intended purpose.”

Morality via stories

The approach used by Quixote is designed to align the goals of the AI with human values by attaching rewards to socially appropriate behaviors.  It's built on previous work by the researchers that highlighted how an AI can infer appropriate actions from crowdsourced story plots harvested from the web.

That earlier system learns what a correct sequence of events looks like and passes this data structure on to Quixote, which converts the signal into a reward designed to reinforce certain behaviors (and punish others).

So, for instance, if the robot is asked to pick up a prescription, the system is given options such as robbing the chemist, waiting in line or interacting politely with the staff.

If no value alignment takes place, the AI might determine that the quickest way of achieving its goal is to rob the chemist; but when values are programmed into it, it is more likely to wait in line and pay for the prescription.
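To make the mechanism concrete, here is a minimal sketch of that kind of reward shaping; it is not the actual Quixote implementation, and the action names and numbers are illustrative assumptions.

```python
# Illustrative sketch only -- not the Quixote code.
# Task utilities and story-derived rewards are made-up numbers.

# Task utility: how quickly/cheaply each action gets the prescription.
task_utility = {
    "rob_pharmacy": 10.0,   # fastest: no queue, no payment
    "wait_in_line": 4.0,    # slower, costs money
    "talk_to_staff": 5.0,
}

# Reward signal distilled from stories: socially acceptable steps are
# reinforced, antisocial ones are punished.
story_reward = {
    "rob_pharmacy": -100.0,
    "wait_in_line": 20.0,
    "talk_to_staff": 15.0,
}

def best_action(align_values: bool) -> str:
    """Pick the highest-scoring action, optionally adding the story reward."""
    def score(action: str) -> float:
        s = task_utility[action]
        if align_values:
            s += story_reward[action]
        return s
    return max(task_utility, key=score)

print(best_action(align_values=False))  # -> "rob_pharmacy"
print(best_action(align_values=True))   # -> "wait_in_line"
```

The point of the design is that the antisocial shortcut only looks attractive until the story-derived penalty is added to the score.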

Thinking about thought

The researchers put the system through its paces and believe it has made important progress in mapping out the possible sequences of actions for a particular scenario.

They have developed a plot trajectory tree, which is then used by the AI to make choices in much the same way as readers do in a choose-your-own-adventure novel.
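A minimal sketch of what such a branching structure could look like follows; the node layout, event names, and reward values are assumptions for illustration, not the paper's actual data format.

```python
# Illustrative sketch of a branching "plot trajectory" structure.
from dataclasses import dataclass, field

@dataclass
class PlotNode:
    event: str                          # what happens at this step
    reward: float = 0.0                 # story-derived signal for taking it
    children: list["PlotNode"] = field(default_factory=list)

def best_trajectory(node: PlotNode) -> tuple[float, list[str]]:
    """Return the highest-scoring path below this node, the way a reader
    might pick the most acceptable branch of a choose-your-own-adventure story."""
    if not node.children:
        return node.reward, [node.event]
    total, path = max(best_trajectory(c) for c in node.children)
    return node.reward + total, [node.event] + path

# A tiny tree for the prescription errand.
root = PlotNode("enter_pharmacy", 1.0, [
    PlotNode("grab_medicine_and_run", -50.0),
    PlotNode("wait_in_line", 5.0, [
        PlotNode("pay_and_thank_pharmacist", 10.0),
    ]),
])

print(best_trajectory(root))
# -> (16.0, ['enter_pharmacy', 'wait_in_line', 'pay_and_thank_pharmacist'])
```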

At the moment, the method is effective for robots that have a relatively limited purpose but are nonetheless required to interact with human beings to achieve their goal.  The team believes it is an important step towards giving machines a degree of moral reasoning, however.

“We believe that AI has to be enculturated to adopt the values of a particular society, and in doing so, it will strive to avoid unacceptable behavior,” they say. “Giving robots the ability to read and understand our stories may be the most expedient means in the absence of a human user manual.”


