Teaching robots to cope with ambiguity

Our attitudes towards robots and automated systems seem to fall into two buckets: either they’ll be unthinking machines that do our every bidding, or they’ll be runaway intelligences that rapidly decide they can do without us.  As with so much, the answer probably lies somewhere in between, and whilst most of the time it’s right for a machine to do as it’s told, there will be instances where it needs to think for itself and disobey.

For instance, a nursing robot may be ordered to give a patient some medication, but know that the dose has already been given that day and that repeating it could cause a harmful overdose.

There are many instances like that where robotic disobedience would be helpful, but programming such interactions is not easy.  Nevertheless, a team from Tufts University is looking to do just that.

Subtle cues

The researchers are developing robotic systems capable of making simple inferences from basic human commands.  These inferences help the system decide whether to carry out the task it has been given, or to refuse because doing so would violate more basic principles.
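To make that idea concrete, here’s a minimal sketch of the kind of rule-based check such a system might run before executing a command.  To be clear, this is not the Tufts team’s actual architecture: the action names, the medication log and the refusal logic are all invented for illustration.

```python
from dataclasses import dataclass


@dataclass
class Command:
    action: str     # e.g. "administer_medication" (hypothetical name)
    target: str     # e.g. a patient identifier
    issued_by: str  # who gave the order


# Hypothetical record of actions already performed today.
medication_log = {("administer_medication", "patient_42")}


def violates_principle(cmd: Command) -> str | None:
    """Return a reason for refusal, or None if the command is acceptable."""
    # One example principle: don't repeat a dose that was already given.
    if (cmd.action, cmd.target) in medication_log:
        return "medication already administered today; repeating it risks an overdose"
    return None


def handle(cmd: Command) -> str:
    reason = violates_principle(cmd)
    if reason:
        return f"Refusing '{cmd.action}': {reason}"
    return f"Executing '{cmd.action}' on {cmd.target}"


print(handle(Command("administer_medication", "patient_42", "nurse_1")))
# -> Refusing 'administer_medication': medication already administered today; ...
```

The interesting design choice is that the robot refuses with a stated reason rather than silently failing, which keeps the human in the loop and able to override or correct the machine’s understanding.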

As you might imagine, this is no small task, as the number of possible outcomes of any action is usually vast.  For instance, if a robot throws a ball out of the door, it might land harmlessly in the garden.  Alternatively, it might roll into the road, with a child running after it into the path of a vehicle.

So there may be perfectly acceptable times and places to throw the ball, and other times and places where it certainly is not.  Additional nuance is added if a human is deliberately attempting to trick the machine into doing their bidding.
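One crude way to capture that context-sensitivity is to project the plausible outcomes of an action in a given context and refuse when any of them looks too harmful.  The sketch below is purely illustrative: the contexts, outcomes and severity scores are made up, and a real system would have to generate such projections rather than look them up in a table.

```python
# Each (action, context) pair maps to plausible outcomes with rough
# severity scores in [0, 1]. All names and numbers here are invented.
OUTCOMES = {
    ("throw_ball", "enclosed_garden"): [("ball lands on grass", 0.0)],
    ("throw_ball", "open_front_door"): [
        ("ball lands in garden", 0.0),
        ("ball rolls into road, child follows", 0.9),
    ],
}

HARM_THRESHOLD = 0.5


def safe_to_act(action: str, context: str) -> bool:
    """Refuse if any plausible outcome exceeds the harm threshold."""
    outcomes = OUTCOMES.get((action, context), [])
    if not outcomes:
        return False  # unknown situation: err on the side of refusal
    return all(severity < HARM_THRESHOLD for _, severity in outcomes)


print(safe_to_act("throw_ball", "enclosed_garden"))  # True
print(safe_to_act("throw_ball", "open_front_door"))  # False
```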

So how would a robot go about understanding this context?  An awful lot of background knowledge is a good start, both about the potential outcomes of an action and about the intentions of the humans it’s interacting with.  Things are further complicated when robots operate in areas governed by external rules, whether laws, regulations or even social norms.

We’re some way from achieving this at the moment, which will limit the range of contexts a robot can operate in, but researchers are doing a considerable amount of work and progress is being made.  Check out the video below to see more from the team at Tufts.
