How do people react when seeing a robot being mistreated? Scientists found out
10-22-2025


When people watch a customer mistreat a service robot, reactions run the emotional gamut. Some copy the rudeness, some feel concern and step in, and a few are even moved to tears.

A team in South Korea traced what drives those reactions. In controlled tests with more than 500 volunteers, they mapped the main pathways behind copycat behavior and empathic helping.

Why bystanders matter

Led by Professor Taeshik Gong at Hanyang University’s ERICA campus in South Korea, the team built a model of what onlookers learn in public service spaces. Their peer-reviewed study shows that witnesses shape the social rules customers follow around robots.

Service robots are showing up in hotels, stores, airports, and clinics. A sweeping academic review finds that frontline robots now handle tasks from check-in to customer support as firms test what works.

Many onlookers are not passive. They judge what is acceptable and what is not in seconds. Those snap judgments can spread through a line, a lobby, or a gate area.

Witness behavior also sends signals to staff. If mockery looks fine, employees may brace for wider disrespect. If kindness is the norm, tensions drop across the floor.

Observing mistreated robots

The team used brief videos of customer-robot encounters to set a shared scene. That kept context steady while they manipulated what bystanders saw.

One study measured whether seeing mistreatment made rudeness feel more permissible to onlookers. Another measured whether seeing harm made the robot seem like a victim, which can drive helping intentions.

The researchers tracked two outcomes that matter for managers. They looked at willingness to be rude and willingness to help.

They also tested two factors that could nudge observers. One was design, and the other was the observer’s values.

People react to mistreated robots

The first is behavioral contagion: people copying others’ negative actions in the moment. When witnesses read a shove or an insult as acceptable, their own bar for civility can drop.

The second is empathy, an emotional response to perceived harm. That response can spark prosocial behavior, meaning actions that help or protect others.

These paths arise together, not in a tidy sequence. What tips the scale is what the observer sees and who the observer is.

Recent work shows that third-party empathy changes how customers respond after robot-involved failures. That prior result makes the new dual-path tests especially useful for real service contexts.

Robot design affects empathy

Humanlike design can shift moral judgments. Decades of psychology research show that seeing human features makes people extend more moral care to nonhuman agents.

In this project, anthropomorphism (treating a nonhuman like a person) strengthened empathic responses and reduced incivility intentions. Details like expressive eyes and natural voices mattered when mistreatment occurred.

Observer values mattered too. Moral identity weakened the pull of contagion and amplified concern for harm.

Context also matters. A hotel lobby is not a lab, but norms still move fast. Small, visible cues can steer those norms toward respect.

What managers can apply

“Managers can design training protocols for employees and visible norm cues such as signage, scripted responses, and pre-recorded reminders to discourage customer mistreatment of robots,” explains Professor Gong.

Visible norms work best when they are consistent. Signs, staff responses, and the robot’s own prompts should point in the same direction.

A robot’s design should anticipate rough treatment. Robust hardware helps, but expressive cues that invite empathy during a tense moment may help more.

“Within a decade, we may see relevant codes of conduct, workplace guidelines, or even legal provisions,” stated Professor Gong. Policy talk is not premature. 

Mistreated robots shape norms

The two-path model aligns with social learning theory. People learn rules by watching others’ outcomes. If the aggressor faces zero pushback, permissibility rises.

It also draws from feelings tied to judgments of right and wrong. When a robot looks and sounds more human, empathic concern becomes easier to trigger.

These are not abstract ideas. In real lines and lobbies, one incident can color the next ten minutes of behavior.

Managers can turn this to their advantage. By shaping cues and design, they can make empathy the easy default, not the exception.

What to watch next

Prepare for spillover effects. If robot abuse looks normal, some customers may generalize that permission toward human staff.

Build pattern interruptions and norm interventions into both training and signage. Brief, well-timed reminders that set expectations can help.

Audit robot scripts for tone. A calm, direct prompt that names the behavior and asks for respect can halt a budding pile-on.

Track outcomes. Measure complaints, assist rates, and time to de-escalate after incidents. Small changes in design or messaging can move the needle.

The study is published in the Journal of Retailing and Consumer Services.
