Image caption: A communication system could make it easier to design systems that enable humans and robots to work together in emergency-response teams. Image: Jose-Luis Olivares/MIT
(February 17, 2016) System
could help prevent robots from overwhelming human teammates with information.
Autonomous robots performing a joint task send each other
continual updates: “I’ve passed through a door and am turning 90 degrees
right.” “After advancing 2 feet I’ve encountered a wall. I’m turning 90 degrees
right.” “After advancing 4 feet I’ve encountered a wall.” And so on.
Computers, of course, have no trouble filing this
information away until they need it. But such a barrage of data would drive a
human being crazy.
At the annual meeting of the Association for the Advancement
of Artificial Intelligence last weekend, researchers at MIT’s Computer Science
and Artificial Intelligence Laboratory (CSAIL) presented a new way of modeling
robot collaboration that reduces the need for communication by 60 percent. They
believe that their model could make it easier to design systems that enable
humans and robots to work together — in, for example, emergency-response teams.
“We haven’t implemented it yet in human-robot teams,” says
Julie Shah, an associate professor of aeronautics and astronautics and one of
the paper’s two authors. “But it’s very exciting, because you can imagine:
You’ve just reduced the number of communications by 60 percent, and presumably
those other communications weren’t really necessary toward the person achieving
their part of the task in that team.”
The work could also have implications for multirobot
collaborations that don’t involve humans. Communication consumes some power,
which is always a consideration in battery-powered devices, but in some
circumstances, the cost of processing new information could be a much more
severe resource drain.
In a multiagent system — the computer science term for any
collaboration among autonomous agents, electronic or otherwise — each agent
must maintain a model of the current state of the world, as well as a model of
what each of the other agents takes to be the state of the world. These days,
agents are also expected to factor in the probabilities that their models are
accurate. On the basis of those probabilities, they have to decide whether or
not to modify their behaviors.