AI soundtracks stir stronger emotions than music composed by humans
07-28-2025

If a soundtrack in a streaming clip recently stirred your emotions more than usual, it may not have come from a human composer at all. A new study shows that AI-generated music doesn’t just fill space – it can move us more than the real thing.

The researchers showed 88 volunteers the same short films while biometric sensors tracked their eyes and skin. After each viewing, participants rated how they felt, and the equipment logged shifts too subtle for the viewers themselves to notice.

Nikolaj Fišer of the Universitat Autònoma de Barcelona (UAB) led the research with colleagues at RTVE in Barcelona and the University of Ljubljana. His team pooled expertise in communication science, media psychology, and audio engineering.

AI tools reshape sound

Generative artificial intelligence tools turn plain-language prompts into full songs. Platforms such as Suno and Stable Audio let users specify mood, tempo, and instrumentation, and their output already powers thousands of short YouTube videos.

A 2024 market report predicts the AI music segment will top $3 billion by 2028. Analysts tie that growth to advertisers and indie game studios hungry for cheap, custom soundtracks.

The Barcelona team asked whether synthetic tracks do more than fill silence. Their three test conditions paired identical visuals with a traditional cinematic score, an AI track built from a rich prompt, and another AI track based on a bare-bones prompt that listed only emotional values.

AI music boosts response

Eye tracking showed wider pupil dilation during both AI tracks than during the human score. On average, pupils widened an extra 0.18 millimeters, and pupil dilation is a reliable physiological marker of stronger emotional arousal.

“Both types of AI-generated music led to greater pupil dilation and were perceived as more emotionally stimulating compared to human-created music,” said Fišer.

The observation matches earlier work linking dilated pupils to heightened, sound-induced excitement.

The findings suggest that decoding the emotional information in AI-generated music may require greater cognitive effort. Detailed-prompt music also raised blink frequency by about 20 percent and boosted galvanic skin response, two signs of heavier cognitive processing.

Those physiological changes never reached awareness. Self-report mood scores stayed neutral, underscoring how bodies can signal strain or thrill before minds notice.

Stretching memory with music

Working memory has strict limits, and sensory streams vie for capacity.

When music grows dense, the brain allocates more resources to decoding rhythm and harmony. Yet despite those limits, the volunteers still recalled scene details on a follow-up quiz given by the researchers.

Moderate cognitive stretching might even help memory by sharpening focus. Researchers have seen similar effects when suspenseful scores accompany learning videos in classrooms.

The comfort of convention

Participants labeled the human score the most familiar even though none had heard it before. Hollywood conventions, such as minor keys for tension and swelling strings for relief, prime ears to tag certain tonal shapes as known, and familiarity often aligns with preference.

AI tracks sounded novel by contrast. Novelty itself widens pupils and drives exploratory attention, a link shown in infant studies using out-of-scale melodies.

Familiarity speeds prediction, letting brains focus on surprises instead of every beat. Predictive coding models suggest that once AI engines fully mirror the statistical regularities of conventional scores, listeners may stop guessing who wrote the music.

Cheaper music, same punch

Production budgets shrink when music can be drafted in seconds instead of weeks. Independent filmmakers already replace temp library cues with AI stems, and the new physiology data hints that those placeholders may survive to final release because their emotional punch is strong.

Meanwhile, concert halls see opportunity. Hybrid shows where algorithms improvise beside live musicians sold out pilot events in Los Angeles this spring.

Licensing rules, however, remain unsettled. The U.S. Copyright Office says fully AI-generated works lack protection, muddying royalty streams for tracks that still rack up millions of plays.

Asset managers notice the turmoil. Stock music libraries now add AI-only shelves and price them lower, betting that high volume will offset thinner margins.

Music schools are responding with new modules on human-AI collaboration. Students who once studied counterpoint now also learn prompt design.

Ethics of algorithmic emotion

Hidden algorithms raise new questions about consent. Viewers on social media never sign consent forms, and no label tells them that the guitar riff underneath a clip might be steering their mood.

Media psychologists caution that pairing sentiment analysis with instant music could let advertisers tweak emotions on the fly.

Artists worry about training data. Lawsuits in New York and London argue that scraping copyrighted recordings violates performers’ rights.

Fišer’s group proposes metadata flags describing arousal targets so editors can set upper limits in children’s or therapeutic content. Regulators in Europe may move first, as the EU Artificial Intelligence Act already calls for clear labels on synthetic audio.

For now, the lesson is simple. The score behind your next online video may come from a server rack, and your body may cheer before your brain notices.

The study is published in the journal PLOS One.
