AI struggles with color metaphors that humans easily understand
07-13-2025

Phrases like “feeling blue” or “seeing red” show up in everyday speech – and most people instantly know they mean feeling sad or angry. But how do we pick up those meanings? Do we learn them through seeing color in the world, or just by hearing how people use them?

A new study from the University of Southern California and Google DeepMind put that question to the test – comparing color-seeing adults, colorblind adults, professional painters, and ChatGPT to see what really shapes our understanding of colorful language: vision or vocabulary.

Lisa Aziz‑Zadeh, a cognitive neuroscientist at USC’s Dornsife Brain and Creativity Institute, headed the project with help from colleagues at Stanford, UC San Diego, Université de Montréal, and other centers.

“ChatGPT uses an enormous amount of linguistic data to calculate probabilities and generate very human‑like responses,” said Aziz‑Zadeh. She wanted to know whether that statistical talent could ever replace the firsthand way people learn color through sight and touch.

How we read color metaphors

For the study, volunteers answered online surveys that asked them to match abstract nouns such as physics or friendship with a hue from a digital palette.

They also judged familiar metaphors like being “on red alert” and unfamiliar ones, such as calling a celebration “a very pink party.”

Color‑seeing and colorblind adults gave almost identical answers, hinting that lifetime language exposure can stand in for missing retinal data. Painters, however, nailed the trickier metaphors more often, suggesting that daily, hands‑on work with pigments sharpens conceptual color maps.

ChatGPT produced steady associations too, but it stumbled on curveballs such as describing a burgundy meeting or reversing green to its opposite. When pressed to explain its choices, the model leaned on culture.

“Pink is often associated with happiness, love, and kindness, which suggest that the party was filled with positive emotions and good vibes,” said ChatGPT.

Painters did better

Artists likely won because practice binds linguistic and sensorimotor knowledge; long hours mixing alizarin and ultramarine create a rich mental index of hue, lightness, and mood.

Earlier work shows that emotional links to colors track both universal patterns and cultural twists.

In the new data, painters spotted fresh metaphors 14 percent more often than non‑painters, a gap the authors tie to the depth of tactile memory.

That finding echoes classroom studies where drawing or sculpting helps students retain technical terms better than reading alone.

Experience beat AI

Barsalou’s grounded‑cognition model argues that every concept the mind stores reactivates sensory traces of how it was acquired. The USC results fit that idea: direct pigment play trumped pure sentence statistics when novelty appeared.

Large language models rely on pattern frequency, not felt experience; their success on common idioms shows how much cultural knowledge ends up encoded in text. Their failure on burgundy and inverted green hints at what text omits, especially edges of meaning that seldom enter print.
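The pattern-frequency idea can be sketched with a toy co-occurrence count. The mini-corpus below is invented for illustration, not the model's actual training data: common idioms like “feeling blue” leave strong statistical traces, while a novel metaphor like a “burgundy” meeting leaves none to lean on.

```python
# Toy illustration: how co-occurrence statistics link colors to emotions.
# The corpus is invented; real models train on web-scale text.
corpus = [
    "feeling blue means feeling sad",
    "she felt sad and blue all week",
    "seeing red means feeling angry",
    "he was red with angry shouting",
]

def cooccurrence(color, word):
    """Count sentences where the color word and the other word appear together."""
    return sum(1 for s in corpus if color in s.split() and word in s.split())

# Familiar idioms have frequency on their side...
print(cooccurrence("blue", "sad"))      # 2
print(cooccurrence("red", "angry"))     # 2
# ...but a fresh metaphor has no statistics at all.
print(cooccurrence("burgundy", "sad"))  # 0
```

A frequency model can only echo what the text it saw already paired together, which is exactly why the study's novel metaphors exposed its limits.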

The gap matters for safety because an AI assistant that misreads a color‑coded warning label could steer users toward hazards.

Adding camera input or haptic feedback – as multimodal systems like CLIP already attempt – might help close that gap.

Color without sight possible

One surprise was that adults born without red‑green vision still matched seething anger with red because language, not sight, planted the idea.

The outcome backs earlier surveys showing nearly universal links between red and dominance, or blue and calm, regardless of pigment perception.

Still, colorblind comprehension is not proof that vision is irrelevant. Participants reported noticing social cues such as traffic lights and lipstick shades through brightness and context, offering partial visual grounding even when hue channels were muted.

Grounding AI in senses

Researchers already experiment with models that link pixels to words, allowing systems to point at a crimson apple or sketch a turquoise nebula. Such pairing trains networks to ground vocabulary in wavelengths and textures, inching closer to the way toddlers learn.
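A stripped-down sketch of that pixel-to-word pairing, with assumed prototype RGB values (the color names and values below are illustrative, not any real model's learned embeddings): the system picks the color word whose prototype lies closest to an observed pixel.

```python
import math

# Hypothetical prototype RGB values for a few color words (illustrative only).
PROTOTYPES = {
    "red": (220, 40, 40),
    "green": (40, 180, 70),
    "blue": (50, 90, 220),
    "pink": (255, 150, 200),
    "burgundy": (128, 0, 32),
}

def distance(a, b):
    """Euclidean distance between two RGB triples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_color_word(pixel):
    """Ground a pixel in vocabulary: return the closest color word."""
    return min(PROTOTYPES, key=lambda w: distance(PROTOTYPES[w], pixel))

print(nearest_color_word((200, 30, 50)))    # a vivid red pixel -> "red"
print(nearest_color_word((250, 140, 190)))  # a pale pink pixel -> "pink"
```

Real multimodal models learn these associations from millions of image-caption pairs rather than hand-set prototypes, but the principle is the same: words acquire meaning by proximity to sensory data.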

Yet there is a cost. Multimodal models demand larger datasets and raise privacy concerns once cameras move from lab benches into public spaces.

Software designers will need governance frameworks that spell out who owns the visual streams, how long they persist, and what biases lurk in the collected scenes.

Without such guardrails, better color sense could come at the price of social trust.

AI might improve with color metaphors

“There’s still a difference between mimicking semantic patterns and the spectrum of human capacity for drawing upon embodied, hands‑on experiences in our reasoning,” said Aziz‑Zadeh. She believes fusing text with images, audio, or even olfactory streams will be key. 

Technically, that means new training pipelines where robots taste paint or wear camera‑equipped gloves.

Ethically, it means designing machines that know when they do not know, especially in domains like medicine, food safety, and aviation, where color signals life or death.

Color metaphors for daily learning

For readers, the study offers a nudge to engage more senses when learning. Writing with colored pens, mapping notes with sticky dots, or simply paying attention to sky hues can enrich both vocabulary and recall.

Meanwhile, if an AI chatbot claims your angry text is tinged chartreuse, treat the judgment lightly. Until models gain something like retinas, their color sense will remain an eloquent but secondhand story.

The study is published in the journal Cognitive Science.
