Music rewires the brain to quickly navigate emotion
07-03-2025

Music can make a sleepy train ride feel romantic or turn a routine workout into an emotional burst of energy.

Scientists have long wondered how a string of notes pushes the brain from one emotion to the next, and why the experience changes depending on the feeling that came before. Matthew Sachs of Columbia University and colleagues set out to answer that puzzle.

A wordless path to emotion

Music reliably evokes specific emotions across cultures, making it a useful tool for lab work. Because melodies lack words, researchers can study pure feeling without the baggage of stories or language.

In recent surveys, listeners reported at least thirteen distinct emotional colors from songs, ranging from awe to tenderness. Those varied reactions let scientists choreograph controlled “emotional journeys” by arranging passages that lean happy, tense, calm, or sad.

The brain also treats music as a social signal, recruiting regions tied to empathy and perspective‑taking. That overlap makes findings with music relevant to understanding everyday interactions.

Music’s precise timing helps investigators lock brain activity to exact seconds in a tune. Such temporal precision is important when studying transitions, which can unfold in a blink.

Finally, unlike pictures or movies, songs can be replayed in multiple orders without breaking logic. That flexibility allowed the team to test how a given passage feels when it follows joy versus when it follows anxiety.

Music changes emotion fast

Sachs’s group asked professional composers to craft two fifteen‑minute pieces split into thirty‑two “emotion events,” each one lasting about 30‑70 seconds. The passages covered calm, joy, sadness, anxiety, and a mixed nostalgic feeling.

Two versions of each piece shuffled the same events so that a joyful segment might follow calm in one version but chase sadness in the other.

This context swap let the researchers isolate the effect of prior mood. Thirty‑nine volunteers lay in an fMRI scanner, a technique that tracks blood flow as a stand‑in for neural activity. The scanner’s pulses were timed to the beat so listeners heard smooth audio through special headphones.

After scanning, the volunteers replayed short clips of music online and rated how strongly each clip made them feel the target emotion. Those ratings confirmed that the music hit its emotional marks at the intended moments.

To align brains that never look identical, the team used a Hidden Markov model, a statistical tool that spots stable patterns and the moments they shift. The model does not assume equal event lengths, matching how real feelings vary.
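To make the segmentation idea concrete, here is a minimal toy sketch in Python. It is not the study's actual pipeline (which fit a full hidden Markov model to multivoxel fMRI patterns); it simply illustrates the underlying logic of finding the split point where a signal divides into two internally stable "events," with the synthetic data and function names being illustrative assumptions.

```python
# Toy sketch of event segmentation (illustrative only; the study used a
# full hidden Markov model fit to fMRI voxel patterns, not this heuristic).
# We find the single boundary that best splits a 1-D signal into two stable
# "events" by minimizing within-segment variance.

def mean(xs):
    return sum(xs) / len(xs)

def within_segment_variance(signal, boundary):
    # Total squared deviation from each segment's own mean; lower means
    # more stable activity within each putative event.
    left, right = signal[:boundary], signal[boundary:]
    total = 0.0
    for seg in (left, right):
        m = mean(seg)
        total += sum((x - m) ** 2 for x in seg)
    return total

def best_boundary(signal):
    # Try every split point and keep the one with the most stable segments.
    return min(range(1, len(signal)),
               key=lambda b: within_segment_variance(signal, b))

# Synthetic "activity": a calm event (values near 0) followed by a
# joyful event (values near 5). The boundary sits at index 5.
activity = [0.1, -0.2, 0.0, 0.2, -0.1, 5.1, 4.9, 5.2, 4.8, 5.0]
print(best_boundary(activity))  # prints 5
```

A real HMM adds what this sketch lacks: it handles many boundaries at once, does not assume equal event lengths, and estimates the patterns and their transition times jointly, which is why it suits feelings that vary in duration.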

Music mapped onto brain activity

Activity patterns in auditory cortex and the adjoining superior temporal gyrus stayed steady within an event but flipped sharply at composer‑defined boundaries. Those flips still appeared after factoring out pitch, volume, and harmony, meaning the brain was tracking emotion rather than mere acoustics.

The same temporal‑parietal strip has been linked to social meaning in speech and laughter. The current findings add music‑evoked emotional dynamics to its résumé.

Elsewhere, frontal regions associated with self‑reflection lit up when feelings changed. That response supports prior work showing prefrontal roles in labeling and regulating emotion.

Importantly, subcortical reward hubs showed general arousal but did not map fine‑grained transitions. Those areas may mark intensity rather than shifts in category.

Together the data sketch a layered system: auditory regions sense change, temporal‑parietal regions label the new state, and frontal areas integrate the experience. That hierarchy resembles proposed models for narrative comprehension.

Past emotions change how you hear music

When a happy passage followed another positive passage, listeners’ brains switched to the new state about six seconds sooner than when the same passage followed a tense one. The volunteers also pressed the “happy” button nine seconds earlier in the same‑valence condition.

Context even changed the spatial pattern of activity during an event. Pairs of participants who heard a clip in the same order showed more similar neural signatures than pairs who heard it in opposite orders.

Such carry‑over echoes psychological findings that earlier feelings bias how we perceive later stimuli. The study quantifies that bias down to single‑second resolution.

Researchers call this stickiness of mood "valence persistence," and extreme persistence can tip into emotional rigidity. Depression is often marked by this rigidity, trapping individuals in negative states.

Treating emotion with playlists

“We know that people who suffer from mood disorders or depression often demonstrate emotional rigidity, where they basically get stuck in an emotional state,” said Sachs.

“Maybe we could take someone with depression, for instance, and use the approach we developed to identify neural markers for the emotional rigidity that keeps them in a very negative state.”

His comment matches growing interest in music‑based interventions. Unlike medications, playlists can be tailored rapidly and have few side effects.

If clinicians can spot delayed transition signatures in vulnerable patients, they might train quicker shifts through neurofeedback. Past work shows that people can learn to nudge targeted brain regions with real‑time feedback.

Therapists already use mood‑tracking apps; adding objective brain markers could improve accuracy. Personalized “emotion workouts” might teach flexible switching, similar to cognitive training for attention.

Beyond clinics, the findings could improve soundtracks in virtual reality or adaptive games. Systems might adjust musical cues to help users linger in, or exit from, certain feelings.

What science hasn’t tested yet

The study focused on adults in a scanner; children, musicians, or people with mood disorders may show different timelines. Future projects could compare those groups directly.

Only five emotion labels were used, while real life holds many blended states, such as awe or envy, that might map differently.

Subcortical areas did not show clear transition patterns here, but that may reflect scanner noise or event length. Higher‑resolution recordings could reveal subtler dynamics.

Music is also just one trigger; spoken stories, movement, or smell might drive other networks. Combining modalities could pinpoint common versus unique pathways.

Finally, the team manipulated valence but not arousal. Experiments that cross calm‑excited levels might tease apart the role of energy versus positivity.

The study is published in eNeuro.
