Chatbots are rivaling humans in unexpected ways
09-28-2025

Customers meet digital helpers every day. A bot answers a shipping question, an app recommends a size, or a robot brings towels to a hotel room. That raises a simple question: does it matter whether the help comes from a person or a machine?

A new meta-analysis pulled together 327 experiments with 281,956 participants to compare robots, chatbots, and algorithms to human employees across many outcomes.
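For readers unfamiliar with how a meta-analysis pools hundreds of experiments into one estimate, the sketch below shows a generic inverse-variance weighting step in Python. This is a minimal illustration only: the effect sizes are made up, and the fixed-effect model shown is an assumption, not the method or data from the study itself.

```python
import math

# Hypothetical per-study results: (effect size, variance) for a
# machine-vs-human comparison, e.g. standardized mean differences.
# These numbers are invented for illustration.
studies = [(-0.30, 0.04), (-0.10, 0.02), (0.05, 0.03), (-0.20, 0.05)]

# Inverse-variance weighting: more precise studies count more.
weights = [1.0 / var for _, var in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled effect

low, high = pooled - 1.96 * se, pooled + 1.96 * se
print(f"pooled effect: {pooled:.3f}, 95% CI [{low:.3f}, {high:.3f}]")
```

Each study's effect is weighted by the inverse of its variance, so large, precise experiments pull the pooled estimate more than small, noisy ones. Published meta-analyses typically use more elaborate random-effects models, but the weighting idea is the same.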

The results show that people rate machine helpers lower on perceptions such as warmth, yet their actual behaviors, such as buying, choosing, or following advice, are often on par with their responses to humans.

The work comes from a team led by Professor Holger Roschk at the Aalborg University Business School (AAUBS).

Chatbots and other artificial agents

The team looked at three kinds of automated agents. Robots work in the physical world with bodies and motors. Chatbots hold text or voice conversations. Algorithms analyze inputs and return outputs without conversation.

The researchers assessed perceptions, intentions, and real behaviors. The clearest pattern appears in behavior: people may not gush about machines, but when it is time to act, the difference between machine and human help is small.

Across all data, automated agents scored a little lower than humans on traits like warmth. That gap shrinks when the outcome is behavior, where the difference is often trivial.

The team also found that context matters. In embarrassing tasks, chatbots match or even outperform humans on compliance and choice. In utilitarian tasks like calculating routes or predicting a wait time, algorithms do fine.

Chatbots seem less judgy

One line in the findings stands out. When the outcome is negative, people accept it more easily from a machine. The meta-analysis reports that customers take a denial, such as a rejected request, less personally from an agent that follows a standard rule.

“We also see that artificial agents, contrary to what you might expect, have certain advantages in situations where a negative response must be given,” said Roschk.

A prior study shows that people feel less judged by a service robot when buying products tied to stigma, which reduces discomfort and boosts acceptance. That aligns with the new meta-analysis pattern for chatbots in awkward digital encounters.

Privacy cuts down the social pressure. A machine cannot form opinions or gossip, and that small psychological shift changes choices.

Chatbot design makes a difference

Giving a robot or chatbot a name helps in conversational tasks, and jobs that demand verbal skill tilt the comparison in their favor. A humanlike look helps robots a bit, but making chatbots look too human can backfire and reduce acceptance.

Speed and reliability help physical robots in repetitive jobs and help algorithms in number-heavy decisions. That edge fades in high-expertise roles, where flexibility, explanation, and judgment give human agents the advantage.

Past research on algorithm aversion shows that people lose trust in an algorithm after seeing a single mistake, even when the same mistake by a person is forgiven. That bias helps explain why perceptions lag behind behavior.

Even with that bias, the new meta-analysis finds that behavior-level outcomes are often equal. People value the output enough to click buy, follow the advice, or choose a machine helper.

Humanlike traits are not always better

A utilitarian task focuses on functional payoffs like accuracy, cost, or time saved. A hedonic task focuses on enjoyment or experience. The study showed that algorithms fare better in utilitarian contexts and worse in hedonic ones.

The experts also discuss automated social presence, a theory that machines can feel socially present or machine-present to us depending on cues. A recent concept paper argues that both dimensions can shape reactions, not only the humanlike one.

Furthermore, humanlike design is not always the winner. In some cases, leaning into machine traits works better, especially for speed, rules, or privacy.

When human agents are favored

Empathy, improvisation, and complex explanations still favor human agents. The data shows gaps in warmth and humanlikeness, which matter in tense, personal, or high-stakes conversations.

“We recommend that companies focus on using artificial agents in situations where they can relieve employees of physically or mentally demanding tasks,” said Roschk.

The most practical takeaway is to place machines where their strengths carry the work and let people handle the truly human parts. 

The study is published in the Journal of Marketing.
