Do you think a robot should be allowed to lie? A new study investigates what people think of robots that deceive their users.
Their research uses examples of robots lying to people to find out if some lies are acceptable – and how people might justify them.
Social norms say lying is sometimes acceptable, if it protects someone from harm. Should a robot be allowed the same privilege to lie for the greater good? The answer, according to this study, is yes – in some cases.
Three types of lies
This is important, because robots are no longer reserved for science fiction. Robots are already part of our daily lives. You can find them vacuum-cleaning your floors at home, serving you at restaurants, or giving your elderly family member companionship. In factories, robots are helping workers assemble cars.
Several companies are even developing robots that may soon be able to do more than just vacuuming. They could do your house chores or play your favourite song when you look sad.
The new study, led by a cognition researcher from George Mason University in the United States, looked at three ways robots might lie to people:
Type 1: The robot could lie about something other than itself.
Type 2: The robot could hide the fact it is able to do something.
Type 3: The robot could pretend it is able to do something even though it is not.
The researchers wrote brief scenarios based on each of those deceptive behaviours, and presented the stories to 498 people in an online survey.
Respondents were asked if the robot’s behaviour was deceptive, and whether or not they thought the behaviour was okay. The researchers also asked respondents if they thought the robot’s behaviour could be justified.
What did the survey find?
While all types of lies were recognised as deceptive, respondents still approved of some types of lies and disapproved of others. On average, people approved of type 1 lies, but not type 2 and type 3.
A majority of respondents (58%) thought a robot lying about something other than itself (type 1) is justified if it spares someone's feelings or prevents harm.
This was the case in one of the stories involving a medical assistant robot that would lie to an elderly woman with Alzheimer’s about her husband still being alive. “The robot was sparing the woman [from] painful emotions,” said one respondent.
On average, respondents didn’t approve of the other two types of lies, though. Here, the scenarios involved a housekeeping robot in an Airbnb rental and a factory robot co-worker.
In the rental scenario, the housekeeping robot hides the fact it records videos while doing chores around the house. Only 23.6% of respondents justified the video recordings by arguing it could keep the house visitors safe or monitor the quality of the robot’s work.
In the factory scenario, the robot complains about the work by saying things like “I’ll be feeling really sore tomorrow”. This gives the human workers the impression the robot can feel pain. Only 27.1% of respondents thought it was okay for the robot to lie, saying it’s a way to connect with the human workers.
“It’s not harming anyone; it’s just trying to be more relatable,” said one respondent.
Surprisingly, the respondents sometimes highlighted that someone else besides the robot was responsible for the lie. For the house cleaning robot hiding its video recording functionality, 80.1% of respondents also blamed the house owner or the programmer of the robot.
Early days for lying robots
If a robot is lying to someone, there could be an acceptable reason for it. There are lots of philosophical debates in research about how robots should fit in with society's social norms. For example, these debates ask whether it is ever ethically acceptable for robots to deceive people, and whether there could be cases where deception serves a good purpose.
This study is the first to ask people directly what they think about robots telling different types of lies.
Previous studies have shown that when we find out robots are lying, we tend to lose trust in them.
Perhaps, though, robot lies are not that straightforward. It depends on whether or not we believe the lie is justified.
The questions then are: who decides what justifies a lie or not? Whom are we protecting when we decide whether or not a robot should be allowed to lie? It might simply not be okay, ever, for a robot to lie.