Be wary of robot emotions: Experts warn ‘simulated love is never love’ as our attachment to life-like AI personalities becomes more common
- Research has shown people have a tendency to project human traits onto robots
- Such traits can be powerful tools for both connection and manipulation
- Scripted reactions may be making the machine seem ‘smarter’ than it actually is
- Experts warn this can trick us into thinking robots are expressing emotion back
When a robot ‘dies,’ does it make you sad? For lots of people, the answer is ‘yes’ – and that tells us something important, and potentially worrisome, about our emotional responses to the social machines that are starting to move into our lives.
For Christal White, a 42-year-old marketing and customer service director in Bedford, Texas, that moment came several months ago with the cute, friendly Jibo robot perched in her home office.
After more than two years in her house, the foot-tall humanoid and its inviting, round screen ‘face’ had started to grate on her.
Sure, it danced and played fun word games with her kids, but it also sometimes interrupted her during conference calls.
Above, MIT professor and robotics researcher Cynthia Breazeal reaches to touch social robot Jibo at the company’s headquarters in Boston. When robots move like humans and talk like humans, even if only a little bit, it’s natural that we will treat them more like humans
White and her husband Peter had already started talking about moving Jibo into the empty guest bedroom upstairs.
Then they heard about the ‘death sentence’ Jibo’s maker had levied on the product as its business collapsed.
News arrived via Jibo itself, which said its servers would be shutting down, effectively lobotomizing it.
‘My heart broke,’ she said. ‘It was like an annoying dog that you don’t really like because it’s your husband’s dog. But then you realize you actually loved it all along.’
The Whites are far from the first to experience this feeling. People took to social media this year to say teary goodbyes to the Mars Opportunity rover when NASA lost contact with the 15-year-old robot.
A few years ago, scads of concerned commenters weighed in on a demonstration video from robotics company Boston Dynamics in which employees kicked a dog-like robot to prove its stability.
Smart robots like Jibo obviously aren’t alive, but that doesn’t stop us from acting as though they are.
Research has shown that people have a tendency to project human traits onto robots, especially when they move or act in even vaguely human-like ways.
Designers acknowledge that such traits can be powerful tools for both connection and manipulation.
That could be an especially acute issue as robots move into our homes – particularly if, like so many other home devices, they also turn into conduits for data collected on their owners.
‘When we interact with another human, dog, or machine, how we treat it is influenced by what kind of mind we think it has,’ said Jonathan Gratch, a professor at the University of Southern California who studies virtual human interactions.
‘When you feel something has emotion, it now merits protection from harm.’
The way robots are designed can influence the tendency people have to project narratives and feelings onto mechanical objects, said Julie Carpenter, a researcher who studies people’s interaction with new technologies.
That is especially true if a robot has something resembling a face, a body that resembles those of humans or animals, or simply seems self-directed, like a Roomba robot vacuum.
‘Even if you know a robot has very little autonomy, when something moves in your space and it seems to have a sense of purpose, we associate that with something having an inner awareness or goals,’ she said.
Such design decisions are also practical, she said. Our homes are built for humans and pets, so robots that look and move like humans or pets will fit in more easily.
Some researchers, however, worry that designers are underestimating the dangers associated with attachment to increasingly life-like robots.
Longtime AI researcher and MIT professor Sherry Turkle, for instance, is concerned that design cues can trick us into thinking some robots are expressing emotion back toward us.
Some AI systems already present as socially and emotionally aware, but those reactions are often scripted, making the machine seem ‘smarter’ than it actually is.
‘The performance of empathy is not empathy,’ she said. ‘Simulated thinking might be thinking, but simulated feeling is never feeling. Simulated love is never love.’
Designers at robotic startups insist that humanizing elements are critical as robot use expands.
The shadow of the Mars Exploration Rover Opportunity as it traveled farther into Endurance Crater in the Meridiani Planum region of Mars. People took to social media this year to say goodbye to the Mars Opportunity rover when NASA lost contact on June 10, 2018
Opportunity’s mission was officially ended after 15 years, following a dust storm that caused NASA to lose contact
‘There is a need to appease the public, to show that you are not disruptive to the public culture,’ said Gadi Amit, president of NewDealDesign in San Francisco.
His agency recently worked on designing a new delivery robot for Postmates – a four-wheeled, bucket-shaped object with a cute, if abstract, face; rounded edges; and lights that indicate which way it’s going to turn.
It’ll take time for humans and robots to establish a common language as they move throughout the world together, Amit said. But he expects it to happen in the next few decades.
But what about robots that work with kids? In 2016, Dallas-based startup RoboKind introduced a robot called Milo designed specifically to help teach social behaviors to kids who have autism.
The robot, which resembles a young boy, is now in about 400 schools and has worked with thousands of kids.
It’s meant to connect emotionally with kids at a certain level, but RoboKind co-founder Richard Margolin says the company is sensitive to the concern that kids could get too attached to the robot, which features human-like speech and facial expressions.
So RoboKind suggests limits in its curriculum, both to keep Milo interesting and to make sure kids are able to transfer those skills to real life.
RoboKind recommends that kids meet with Milo only three to five times a week, for 30 minutes each time.
WHY ARE PEOPLE SO WORRIED ABOUT AI?
It is an issue troubling some of the greatest minds in the world at the moment, from Bill Gates to Elon Musk.
SpaceX and Tesla CEO Elon Musk described AI as our ‘biggest existential threat’ and likened its development to ‘summoning the demon’.
He believes super intelligent machines could use humans as pets.
Professor Stephen Hawking said it is a ‘near certainty’ that a major technological disaster will threaten humanity in the next 1,000 to 10,000 years.
They could steal jobs
More than 60 percent of people fear that robots will lead to there being fewer jobs in the next ten years, according to a 2016 YouGov survey.
And 27 percent predict that it will decrease the number of jobs ‘a lot’, with previous research suggesting admin and service sector workers will be the hardest hit.
A quarter of the respondents predicted robots will become part of everyday life in just 11 to 20 years, with 18 percent predicting this will happen within the next decade.
As well as posing a threat to our jobs, other experts believe AI could ‘go rogue’ and become too complex for scientists to understand.
They could ‘go rogue’
Computer scientist Professor Michael Wooldridge said AI machines could become so intricate that engineers don’t fully understand how they work.
If experts don’t understand how AI algorithms function, they won’t be able to predict when they fail.
This means driverless cars or intelligent robots could make unpredictable ‘out of character’ decisions during critical moments, which could put people in danger.
For instance, the AI behind a driverless car could choose to swerve into pedestrians or crash into barriers instead of deciding to drive sensibly.
They could wipe out humanity
Some people believe AI will wipe out humans completely.
‘Eventually, I think human extinction will probably occur, and technology will likely play a part in this,’ DeepMind’s Shane Legg said in a recent interview.
He singled out artificial intelligence, or AI, as the ‘number one risk for this century’.
Musk warned that AI poses more of a threat to humanity than North Korea.
‘If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea,’ the 46-year-old wrote on Twitter.
‘Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that’s a danger to the public is regulated. AI should be too.’
Musk has consistently advocated for governments and private institutions to apply regulations on AI technology.
He has argued that controls are necessary in order to prevent machines from advancing beyond human control.