Researchers find a cheaper way to make driverless cars more aware

I recently wrote about a project emerging from Imperial College London that aimed to help automated systems ‘see’ more effectively.

The team have developed open source software, called ElasticFusion, which gives robots a better understanding of their environment and their place within it.

The ultimate aim is to allow robots to operate more safely in the home by mapping the environment and identifying elements within it.

Seeing in the outside world

Of course, that kind of contextual awareness is crucial for things such as driverless cars, a challenge that a team from the University of Cambridge are taking on.

Their work allows driverless cars both to identify their location in the absence of GPS and to recognise other elements of the surrounding environment, all in real time. What’s more, the technology only requires a regular camera or smartphone to function, cutting huge sums from the cost of achieving such outcomes.

The work, which is freely available, is not yet advanced enough to control a driverless car, but it does help the machine visualize its environment.

Image processing

The system works by taking snapshots of the street; if it hasn’t seen an image before, it instantly classifies it, sorting the objects within it into one of 12 categories, including street signs, buildings and pedestrians. It does all of this in real time and can factor in changing light conditions.
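To make that per-pixel labelling idea concrete, here is a minimal sketch in Python/PyTorch (not the actual SegNet code) of an encoder-decoder network that assigns every pixel of a camera snapshot to one of 12 classes. The layer sizes, input resolution and class list are illustrative assumptions.

```python
# Minimal sketch: a toy encoder-decoder that labels each pixel of a street
# image with one of 12 classes. Architecture and sizes are illustrative only.
import torch
import torch.nn as nn

NUM_CLASSES = 12  # e.g. road, building, street sign, pedestrian, ... (assumed list)

class TinySegmenter(nn.Module):
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        # Encoder: downsample the image and extract features
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: upsample back to the input resolution, one logit per class per pixel
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, num_classes, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))  # (N, 12, H, W) logits

model = TinySegmenter().eval()
frame = torch.rand(1, 3, 360, 480)   # one camera snapshot (dummy data)
with torch.no_grad():
    logits = model(frame)
labels = logits.argmax(dim=1)        # per-pixel class index in 0..11
print(labels.shape)                  # torch.Size([1, 360, 480])
```

Taking the argmax over the class dimension gives a label map the same size as the input frame, which is the kind of output the system uses to distinguish, say, a pedestrian from a building.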

Thus far it correctly labels objects around 90% of the time, which compares well with much more expensive and complex laser- or radar-based systems.

With the system freely available to the public, the team encourage people to upload images of their own environment to help make it smarter.

Applications for driverless cars

Of course, most driverless cars currently rely on exactly that kind of expensive laser or radar technology, so this more affordable solution could have strong implications for the industry.

The system, which learns by example, has already been ‘trained’ by the project team, who recruited undergraduates at Cambridge to label every pixel in over 5,000 images.

These images were then used to train the system before it was put through its paces in a live environment, on both built-up roads and motorways.
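For illustration, that training-by-example step might look something like the following sketch, which fits a segmenter to images paired with hand-labelled per-pixel class maps using a standard pixel-wise cross-entropy loss. The stand-in network, optimiser settings and dummy batch are placeholders, not the project’s actual configuration.

```python
# Sketch of training by example: each training image comes with a map giving
# the class (0..11) of every pixel, and the network is fitted to reproduce it.
import torch
import torch.nn as nn

# Stand-in for the real segmentation network: outputs 12 class logits per pixel
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 12, 3, padding=1),
)
optimiser = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()   # compares 12-way logits against pixel labels

def train_step(images, label_maps):
    """images: (N, 3, H, W) floats; label_maps: (N, H, W) ints in 0..11."""
    optimiser.zero_grad()
    logits = model(images)                 # (N, 12, H, W)
    loss = criterion(logits, label_maps)   # averaged over every labelled pixel
    loss.backward()
    optimiser.step()
    return loss.item()

# Dummy batch standing in for the ~5,000 hand-labelled training images
images = torch.rand(4, 3, 120, 160)
label_maps = torch.randint(0, 12, (4, 120, 160))
print(train_step(images, label_maps))
```

Repeating this step over the full set of labelled images is what the team describe as “training” the system before testing it on live roads.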

“It’s remarkably good at recognising things in an image, because it’s had so much practice,” the team say. “However, there are a million knobs that we can turn to fine-tune the system so that it keeps getting better.”

It’s certainly an interesting approach, and a project to follow closely. You can learn more about SegNet in the video below.
