In the age of AI-driven technology, how children interact with smart devices has become a topic of concern and curiosity. A recent study by developmental psychologists at Duke University examined how children perceive and treat smart speakers like Amazon’s Alexa, comparing their views of its intelligence and sensitivity with their views of the autonomous vacuum Roomba.
The study, published on April 10 in the journal Developmental Psychology, sought to understand how children aged four to eleven view these devices and their capabilities, as well as how they may treat them.
The inspiration behind the research came from lead author Teresa Flanagan’s observations of human-robot interactions in popular media such as HBO’s “Westworld.”
Flanagan, a visiting scholar in the department of psychology & neuroscience at Duke, explained: “In Westworld and the movie Ex Machina, we see how adults might interact with robots in these very cruel and horrible ways. But how would kids interact with them?”
To investigate this question, Flanagan and her team recruited 127 children visiting a science museum with their families. The children were shown a 20-second clip of both Alexa and Roomba in action, and then asked questions about each device.
Under the guidance of Dr. Tamar Kushnir, a Duke Institute for Brain Sciences faculty member and Flanagan’s graduate advisor, the researchers analyzed the survey data and uncovered some intriguing results.
The researchers found that, overall, children aged four to eleven attributed more human-like thoughts and emotions to Alexa than to Roomba. Despite this perceived difference in intelligence, the majority of children believed that neither device should be yelled at or harmed. However, this conviction seemed to wane as children approached adolescence.
Furthermore, the survey data revealed that children generally believed that both Alexa and Roomba were incapable of experiencing physical sensations such as ticklishness or pain when pinched.
Nevertheless, they credited Alexa with mental and emotional capabilities, like the ability to think or feel upset after someone mistreats it. This attribution did not extend to Roomba.
“Even without a body, young children think the Alexa has emotions and a mind,” Flanagan explained. “And it’s not that they think every technology has emotions and minds – they don’t think the Roomba does – so it’s something special about the Alexa’s ability to communicate verbally.”
The study also probed children’s attitudes toward the ethical treatment of the two devices.
“Kids don’t seem to think a Roomba has much mental abilities like thinking or feeling,” Flanagan said. “But kids still think we should treat it well. We shouldn’t hit or yell at it even if it can’t hear us yelling.”
The study found that, regardless of the perceived abilities of the two devices, children of all ages agreed it was wrong to hit or yell at the machines. Interestingly, as the children grew older, their views shifted slightly, suggesting that mistreating technology becomes somewhat more permissible with age.
“Four- and five-year-olds seem to think you don’t have the freedom to make a moral violation, like attacking someone,” Flanagan explained. “But as they get older, they seem to think it’s not great, but you do have the freedom to do it.”
These findings provide valuable insights into the evolving relationship between children and technology, raising important questions about the ethical treatment of AI and machines. They also prompt parents to consider whether they should model good behavior towards AI devices, such as expressing gratitude to Siri or its more sophisticated counterpart, ChatGPT.
Flanagan and Kushnir are now working to understand why children believe it is wrong to mistreat home technology. The participants offered varied reasoning: one 10-year-old said that yelling at the technology is not okay because “the microphone sensors might break if you yell too loudly.” Another 10-year-old, by contrast, said that yelling is inappropriate because “the robot will actually feel really sad.”
“It’s interesting with these technologies because there’s another aspect: it’s a piece of property,” Flanagan said. “Do kids think you shouldn’t hit these things because it’s morally wrong, or because it’s somebody’s property and it might break?”
This study highlights the unique way children perceive and interact with technology, particularly those that possess human-like communication abilities. As smart devices continue to pervade daily life, understanding these interactions may become increasingly important in shaping the design and implementation of future AI-driven technologies.
While it is difficult to predict the exact ways children will interact with AI in the future, scientists and researchers envision a world where AI plays a significant role in shaping children’s experiences across various domains, including education, entertainment, healthcare, and social development.
Researchers and educators will need to consider the potential risks and benefits of AI integration in children’s lives, ensuring that AI technologies are designed to promote healthy development, learning, and positive interactions. It will also be essential to monitor and study the long-term effects of AI on children’s cognitive, emotional, and social development, as well as to establish guidelines and best practices for AI usage among children.