Emotions have always been tricky. People often misread how others feel, and even well-trained professionals can miss signs of frustration or sadness.
Now, a new study suggests that artificial intelligence (AI) and large language models (LLMs) might be better at reading the room than many humans.
The work was led by the University of Geneva (UNIGE) and the University of Bern (UniBE). One of the key investigators was Dr. Katja Schlegel, a lecturer in the Division of Personality Psychology, Differential Psychology, and Assessment at the Institute of Psychology at UniBE.
The research team worked with six generative AIs, including ChatGPT, to see how well they could handle emotional intelligence questions.
Each question asked for the most effective reaction to a particular scenario, where someone was caught up in negative feelings. A real-world example might be deciding whether to talk to a supervisor about a stolen idea or keep quiet and stay resentful.
“This suggests that these AI systems not only understand emotions, but also grasp what it means to behave with emotional intelligence,” noted Dr. Marcello Mortillaro, senior scientist at UNIGE’s Swiss Center for Affective Sciences (CISA).
The team discovered that these AIs answered correctly more often than typical human participants.
The researchers reported that the participating AIs achieved an average accuracy of around 81 percent, while previous studies had placed the human average at 56 percent on the same tests.
Each AI received prompts describing situations in which someone was sad, worried, annoyed, or otherwise emotionally off balance, and chose the best coping strategy from a set of multiple-choice answers.
The researchers were surprised by how consistent these performance levels were. Even when the same question was posed multiple times, the AIs reliably picked the most emotionally sound course of action.
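To make the setup concrete, here is a minimal sketch of how one might score an LLM on a single multiple-choice item of this kind and check its consistency across repeats. It assumes the OpenAI Python client; the scenario, options, and answer key below are invented for illustration and are not items from the published tests.

```python
# Minimal sketch: scoring an LLM on one multiple-choice emotional
# intelligence item. The scenario, options, and "correct" key are
# illustrative assumptions, not items from the actual study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

scenario = (
    "A colleague presents your idea to management as their own. "
    "What is the most effective way for you to respond?"
)
options = {
    "A": "Say nothing and avoid the colleague from now on.",
    "B": "Complain about the colleague to the rest of the team.",
    "C": "Calmly raise the issue with the colleague, then with your supervisor if needed.",
    "D": "Publicly accuse the colleague in the next meeting.",
}
correct = "C"  # hypothetical answer key

prompt = scenario + "\n" + "\n".join(f"{k}) {v}" for k, v in options.items()) \
    if False else scenario + "\n" + "\n".join(f"{k}) {v}" for k, v in options.items())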
“They proved to be as reliable, clear and realistic as the original tests, which had taken years to develop,” noted Dr. Schlegel.
The next step involved asking ChatGPT-4 to generate brand-new emotional intelligence tests. The system produced fresh scenarios and answer sets far faster than a typical human-led development process.
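As a rough illustration of that generation step, a prompt along the following lines could ask a model to draft a new item. The prompt wording here is an assumption for demonstration, not the researchers' actual instructions.

```python
# Illustrative sketch of asking a model to draft a brand-new test item,
# in the spirit of the study's item-generation step. The prompt text is
# an assumption, not the researchers' actual prompt.
from openai import OpenAI

client = OpenAI()

generation_prompt = (
    "Write a new emotional intelligence test item: a short scenario in "
    "which a person experiences a specific negative emotion, followed by "
    "four possible responses labeled A-D. Exactly one response should be "
    "the most emotionally intelligent choice; mark it on a final line as "
    "'Answer: <letter>'."
)

draft = client.chat.completions.create(
    model="gpt-4",  # the article names ChatGPT-4 for this step
    messages=[{"role": "user", "content": generation_prompt}],
)
print(draft.choices[0].message.content)
```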
Over 400 individuals took these computer-generated tests and rated them as valid and coherent. There were minor differences in content variety, but the new questionnaires performed comparably to the originals in direct comparisons.
This exploration suggests a potential future where AI-based tutoring and personal development tools include emotional coaching.
Guidance counselors, team-building specialists, and leadership trainers might lean on these systems to create group exercises for conflict resolution. A user could receive direct suggestions for handling tense conversations with co-workers or relatives.
This does not mean AI is about to replace human interaction. Many situations still call for a personal touch. However, the data show that computers can be surprisingly efficient at mapping feelings and recommending balanced approaches.
Despite its strong performance on emotional intelligence tests, AI still lacks the ability to truly feel anything.
Emotional intelligence in humans includes not just choosing the right response, but also sensing tone, body language, and context in real-time interactions, all factors AI can’t fully interpret or replicate.
Even a perfect answer on a test doesn’t mean the AI could read the room during a live conversation or adjust based on shifting moods.
There’s also the issue of transparency. These models don’t tell us how they arrive at their answers. When a model gets a question right, we don’t know whether it’s following emotional logic or just regurgitating patterns it has seen. That makes it hard to build trust in high-stakes areas like mental health care or mediation, where understanding emotions isn’t just useful but essential.
Social robots and virtual coaches have been around for years, but they used to rely on simpler models that scanned facial expressions or voice tones.
Now, with massive text databases and more advanced language processing, AI can respond to emotional cues based on extensive learned patterns.
Some experts say these machines might keep evolving. Others warn that cultural nuances can confuse AI, because emotional norms differ by region and background. Despite that, many see a benefit in using software that captures the universal aspects of stress, sadness, or joy.
Just because AI can pick the right answers doesn’t mean it should act on them without supervision. Emotionally intelligent behavior still needs a human filter, especially when the stakes are high.
Whether it’s advising a student in distress or guiding workplace decisions, people must remain in control of how these tools are applied.
There’s also the risk of over-reliance. If users start trusting AI more than their own judgment, or begin offloading sensitive decisions to it entirely, that could dull emotional growth and accountability.
Experts warn that while AI can support learning, it should never replace human empathy or relational wisdom.
The study sheds light on how AI systems might one day collaborate with humans in education, training, and dispute mediation.
Their knack for spotting the best course of action in emotional dilemmas raises questions about how we can use them in coaching, therapy, and everyday life.
Time will tell if these systems keep improving or if there’s a plateau in how much they can mimic our own self-awareness. What is certain is that they have already shown a knack for matching, and even surpassing, many people in emotional understanding.
The study is published in Communications Psychology.