December 21, 2015

Teaching machines to see: new smartphone-based system could accelerate development of driverless cars

SegNet demonstration. Credit: Alex Kendall

Two technologies that use deep learning to help machines see and recognise their location and surroundings could support the development of driverless cars and autonomous robotics, and both can run on a regular camera or smartphone.

The two newly developed systems can identify a user’s location and orientation in places where GPS does not function, and pick out the various components of a road scene in real time. Both run on a regular camera or smartphone, performing the same job as sensors that cost tens of thousands of pounds.

The separate but complementary systems have been designed by researchers from the University of Cambridge, and demonstrations of both are freely available online. Although the systems cannot currently control a driverless car, the ability to make a machine ‘see’ and accurately identify where it is and what it’s looking at is a vital part of developing autonomous vehicles and robotics.

The first system, called SegNet, can take an image of a street scene it hasn’t seen before and classify it, sorting objects into 12 different categories (such as roads, street signs, pedestrians, buildings and cyclists) in real time. It can deal with light, shadow and night-time environments, and currently labels more than 90% of pixels correctly. Previous systems using expensive laser- or radar-based sensors have not been able to reach this level of accuracy while operating in real time.
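For readers curious what this kind of per-pixel labelling looks like in code, below is a minimal sketch of a SegNet-style encoder-decoder written in PyTorch. It shows SegNet’s characteristic trick of reusing the max-pooling indices from the encoder to upsample in the decoder, but the layer sizes, the 12-class output and all names here are illustrative assumptions, not the Cambridge team’s actual implementation.

    # A toy SegNet-style network: an encoder that records max-pooling
    # indices and a decoder that unpools with those same indices.
    # Hypothetical sizes; the real SegNet uses a much deeper VGG-style encoder.
    import torch
    import torch.nn as nn

    class TinySegNet(nn.Module):
        def __init__(self, num_classes=12):
            super().__init__()
            # Encoder: convolutions, then pooling that returns the
            # locations of the maxima alongside the pooled values.
            self.enc1 = nn.Sequential(
                nn.Conv2d(3, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU())
            self.enc2 = nn.Sequential(
                nn.Conv2d(64, 128, 3, padding=1), nn.BatchNorm2d(128), nn.ReLU())
            self.pool = nn.MaxPool2d(2, 2, return_indices=True)
            # Decoder: unpooling with the stored indices restores spatial
            # detail cheaply; convolutions then densify the sparse maps.
            self.unpool = nn.MaxUnpool2d(2, 2)
            self.dec2 = nn.Sequential(
                nn.Conv2d(128, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU())
            self.dec1 = nn.Conv2d(64, num_classes, 3, padding=1)

        def forward(self, x):
            x, idx1 = self.pool(self.enc1(x))
            x, idx2 = self.pool(self.enc2(x))
            x = self.dec2(self.unpool(x, idx2))
            x = self.dec1(self.unpool(x, idx1))
            return x  # per-pixel class scores, shape (N, num_classes, H, W)

    # Label every pixel of a frame with one of the 12 classes
    # (the network is untrained here, so the labels are meaningless).
    model = TinySegNet()
    image = torch.rand(1, 3, 360, 480)   # stand-in for a camera frame
    labels = model(image).argmax(dim=1)  # (1, 360, 480) class indices

Passing the encoder’s pooling indices to the decoder, rather than storing full feature maps, is part of what keeps this style of network light enough to run in real time on modest hardware such as a smartphone.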
