As artificial intelligence (AI) becomes increasingly integrated into our lives, it is important to consider how it affects different groups of people. A recent study by Baycrest, a research institute in Canada, found that older adults have more difficulty than younger adults in distinguishing computer-generated speech from human speech. This could put them at greater risk of being taken advantage of, for example through scam calls.
The study was led by Dr. Björn Herrmann, Baycrest’s Canada Research Chair in Auditory Aging and Scientist at the Rotman Research Institute. “Findings from this study on computer-generated AI speech suggest that older adults may be at a higher risk of being taken advantage of,” said Dr. Herrmann. “While this area of research is still in its infancy, further findings could lead to the development of training programs for older adults to help them navigate these challenges.”
The study was the first to examine how well older adults recognize AI-generated speech. Younger and older participants listened to sentences spoken by 10 different human speakers and sentences created using 10 AI voices. In one experiment, participants rated how natural they found the human and AI voices. In another, they were asked to identify whether each sentence was spoken by a human or by an AI voice.
The results showed that compared to younger adults, older adults found AI speech more natural and were less able to correctly identify when speech was generated by a computer. The reasons for this remain unclear, but Dr. Herrmann and his team have ruled out hearing loss and familiarity with AI technology as factors. It could be related to older adults’ diminished ability to recognize different emotions in speech.
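The article does not describe how identification accuracy was scored, but studies of this kind are often analyzed with standard signal-detection measures, which separate a listener's true sensitivity from any bias toward answering "AI" or "human." As a purely illustrative sketch (the counts and the scoring method are assumptions, not from the study), sensitivity d′ for the human-versus-AI task could be computed like this:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' for a human-vs-AI identification task.

    A 'hit' means calling an AI-generated sentence "AI"; a 'false
    alarm' means calling a human sentence "AI". Rates are corrected
    (add 0.5 to each count, 1 to each total) to avoid infinite
    z-scores when a rate is exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical listener judging 50 AI and 50 human sentences:
# correctly flags 40 AI sentences, wrongly flags 15 human ones.
print(d_prime(40, 10, 15, 35))
```

A d′ near zero means the listener cannot tell the two voice types apart; higher values mean better discrimination. On this kind of measure, the study's older listeners would show lower d′ than younger ones.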
“As we get older, we seem to pay more attention to the actual words in speech than to its rhythm and intonation when trying to get information about the emotions being communicated,” said Dr. Herrmann. “It could be that recognition of AI speech relies on the processing of rhythm and intonation rather than words, which could in turn explain older adults’ reduced ability to identify AI speech.”
The results of this and future studies could help inform interactive AI technology for older adults. This kind of technology often relies on AI speech and has increasing applicability in medical, long-term care and other support spaces for older adults. For example, therapeutic AI robots can be used to comfort and calm individuals experiencing agitation due to dementia.
By better understanding how older adults perceive AI speech, we can ensure that AI technologies effectively meet their needs, ultimately improving their quality of life and helping them lead a life of purpose, inspiration and fulfilment. The study was supported by the Canada Research Chairs program and the Natural Sciences and Engineering Research Council of Canada.
The study does not directly address older adults' susceptibility to fake news generated with AI speech. However, its finding that older adults have more difficulty distinguishing computer-generated from human speech suggests they could be more vulnerable to deception in situations such as fraudulent or scam calls.
Susceptibility to fake news, however, involves many factors beyond the ability to identify AI speech; cognitive decline, social isolation, and limited media literacy can all contribute. Further research is needed to investigate the relationship between age-related differences in the perception of AI speech and susceptibility to fake news.
The study is published in the International Journal of Speech Technology.