Marc, a 3D-printed robot from the same lab, offers a friendly handshake. Marc will be used in further studies. Image: University of Lincoln
(October 15, 2015) Humans are less likely to form successful working relationships with interactive robots that are programmed to be too perfect, new research reveals.
Interactive or ‘companion’ robots are increasingly used to support caregivers of elderly people and of children with autism, Asperger syndrome or attachment disorder. Yet the research suggests that by programming their behaviour to be ever more intelligent, we could in fact be creating barriers to long-term human-robot relationships.
Conducted by robotics experts from the University of
Lincoln, UK, the study found that a person is much more likely to warm to an
interactive robot if it shows human-like ‘cognitive biases’ - deviations in
judgement which form our individual characteristics and personalities, complete
with errors and imperfections.
The investigation was conducted by PhD researcher Mriganka Biswas and overseen by Dr John Murray of the University of Lincoln’s School of Computer Science. Their findings were presented at the International Conference on Intelligent Robots and Systems (IROS) in Hamburg in October 2015.
Mriganka said: “Our research explores how we can make a
robot’s interactive behaviour more familiar to humans, by introducing
imperfections such as judgemental mistakes, wrong assumptions, expressing
tiredness or boredom, or getting overexcited. By developing these cognitive
biases in the robots – and in turn making them as imperfect as humans – we have
shown that flaws in their ‘characters’ help humans to understand, relate to and
interact with the robots more easily.”
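The study itself does not publish code, but the general idea of deliberately injecting human-like imperfections into a robot’s interactive behaviour can be pictured with a short, hypothetical sketch. The behaviour list, the choose_response function and the bias_rate parameter below are illustrative assumptions for this article, not the researchers’ actual implementation.

```python
import random

# Hypothetical illustration only: occasionally replace the "perfect"
# response with an imperfect, human-like behaviour such as a wrong
# assumption, tiredness or overexcitement.

BIASED_BEHAVIOURS = [
    "make a wrong assumption about the user's request",
    "admit to feeling tired and respond more slowly",
    "get overexcited and interrupt",
    "misremember an earlier detail of the conversation",
]

def choose_response(correct_response: str, bias_rate: float = 0.2) -> str:
    """Return the 'perfect' response most of the time, but with probability
    bias_rate substitute one of the imperfect behaviours instead."""
    if random.random() < bias_rate:
        return random.choice(BIASED_BEHAVIOURS)
    return correct_response

if __name__ == "__main__":
    # Over repeated turns, roughly one in five responses shows a 'flaw'.
    for _ in range(5):
        print(choose_response("give the factually correct answer"))
```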