Robots’ maps of their environments can make existing
object-recognition algorithms more accurate.
(July 24, 2015) John Leonard’s group in the MIT Department
of Mechanical Engineering specializes in SLAM, or simultaneous localization and
mapping, the technique whereby mobile autonomous robots map their environments
and determine their locations.
Last week, at the Robotics: Science and Systems conference,
members of Leonard’s group presented a new paper demonstrating how SLAM can be
used to improve object-recognition systems, which will be vital components of
future robots that have to manipulate the objects around them in arbitrary
ways.
The system uses SLAM information to augment existing
object-recognition algorithms. Its performance should thus continue to improve
as computer-vision researchers develop better recognition software and
roboticists develop better SLAM software.
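In rough terms, that modularity might look like the following Python sketch, in which SLAM and the recognizer each remain black boxes. Every name in it (slam.update, slam.associate, detector.detect, class_probs) is a hypothetical stand-in, not an interface from the group's actual code, and the fusion step is sketched separately after the quote below.

    from collections import defaultdict

    def recognize_with_slam(frames, slam, detector, fuse):
        """Illustrative pipeline only; all interfaces are hypothetical.

        SLAM's pose estimates and map let detections from different
        frames be associated with the same persistent physical object,
        so their classification evidence can be pooled."""
        evidence = defaultdict(list)  # map-object id -> per-view class-probability vectors
        for frame in frames:
            pose = slam.update(frame)               # black box 1: localization and mapping
            for det in detector.detect(frame):      # black box 2: single-view recognition
                obj_id = slam.associate(det, pose)  # tie the detection to a map object
                evidence[obj_id].append(det.class_probs)
        # Combine each object's accumulated evidence into one label distribution.
        return {obj_id: fuse(views) for obj_id, views in evidence.items()}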
“Considering object recognition as a black box, and
considering SLAM as a black box, how do you integrate them in a nice manner?”
asks Sudeep Pillai, a graduate student in computer science and engineering and
first author on the new paper. “How do you incorporate probabilities from each
viewpoint over time? That’s really what we wanted to achieve.”
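The article doesn't spell out the paper's fusion rule, but one standard way to "incorporate probabilities from each viewpoint over time" is a naive-Bayes-style update: treat each viewpoint's classifier output as independent evidence and accumulate it in log space. The snippet below is a minimal sketch under that assumption, with a uniform prior over classes; it should not be read as the paper's actual method.

    import numpy as np

    def fuse_viewpoints(per_view_probs, num_classes):
        """Fuse class probabilities from several views of one object,
        assuming each view's output is independent evidence (naive Bayes)."""
        # Uniform prior over classes, kept in log space for stability.
        log_belief = np.full(num_classes, -np.log(num_classes))
        for probs in per_view_probs:
            # Clip to avoid log(0) from overconfident single-view classifiers.
            log_belief += np.log(np.clip(probs, 1e-9, 1.0))
        # Renormalize so the result is a proper probability distribution.
        log_belief -= np.logaddexp.reduce(log_belief)
        return np.exp(log_belief)

    # Three views of the same object, grouped by SLAM's data association:
    views = [np.array([0.50, 0.30, 0.20]),   # ambiguous distant view
             np.array([0.60, 0.25, 0.15]),   # closer view
             np.array([0.70, 0.20, 0.10])]   # clear view after the robot moves
    print(fuse_viewpoints(views, num_classes=3))  # mass concentrates on class 0

Because the combination is multiplicative, consistently weak evidence across viewpoints compounds into a confident label, which is exactly the kind of behavior a moving robot can exploit.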
Yet even though it works with existing SLAM and
object-recognition algorithms, and uses only the output of an ordinary video
camera, the system already performs comparably to special-purpose robotic
object-recognition systems that factor in depth measurements as well as visual
information.