Color feels personal, but your brain may be using a code that other brains share. A new study asks whether patterns of activity for different colors are similar enough across people to make useful predictions.
Instead of treating each person as a one-off case, the researchers tested whether one person’s brain activity could help identify the color another person was viewing.
The research, led by Michael M. Bannert of the University of Tübingen, leans on simple ideas, careful mapping, and a healthy respect for what we still do not know.
The team used functional magnetic resonance imaging (fMRI) to record changes in blood oxygenation in visual areas while people viewed colored rings that moved across the screen. The signal is slow and indirect, but it captures consistent patterns that vary with different kinds of visual input.
For each individual, the researchers first built a detailed retinotopic map – a chart of which patch of cortex responds to which location in the visual field. The maps made it possible to align brains across people in a common frame tied to space on the retina.
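To make that concrete, here is a minimal sketch of what retinotopy-based alignment could look like in Python. It is not the authors' code: the array names, grid size, and interpolation method are illustrative assumptions about how per-voxel visual-field preferences might be resampled onto a shared grid.

```python
# Sketch only: assumes a retinotopic mapping session has already given us,
# for every voxel, a preferred eccentricity (deg) and polar angle (rad).
import numpy as np
from scipy.interpolate import griddata

def to_common_grid(ecc, ang, response, grid_size=32, max_ecc=10.0):
    """Resample one subject's per-voxel responses onto a shared visual-field grid."""
    # Convert each voxel's preferred visual-field location to Cartesian coordinates.
    x = ecc * np.cos(ang)
    y = ecc * np.sin(ang)

    # Build the shared grid covering the mapped portion of the visual field.
    lin = np.linspace(-max_ecc, max_ecc, grid_size)
    gx, gy = np.meshgrid(lin, lin)

    # Interpolate scattered voxel responses onto the grid; cells outside the
    # sampled region come back as NaN and can be masked downstream.
    return griddata((x, y), response, (gx, gy), method="linear")
```

Once every subject's activity lives on the same grid, patterns from different brains can be compared cell by cell, which is what makes cross-person decoding possible in the first place.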
With brains lined up this way, the team trained a linear classifier on color responses from one group of observers. They then tested whether that classifier could identify the hue and luminance that a new person was seeing using only the new person’s spatially aligned pattern of activity.
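In code, this train-on-some-people, test-on-a-new-person logic amounts to leave-one-subject-out cross-validation. The sketch below uses scikit-learn on random placeholder data; the array shapes, the four hue classes, and the logistic-regression classifier are assumptions for illustration, not the study's actual pipeline.

```python
# Sketch only: X holds one spatially aligned activity pattern per trial,
# y the stimulus label, and groups records which subject each trial came from.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_trials, n_features, n_subjects = 240, 1024, 8
X = rng.standard_normal((n_trials, n_features))  # aligned patterns (placeholder)
y = rng.integers(0, 4, n_trials)                 # e.g., four hue classes
groups = np.repeat(np.arange(n_subjects), n_trials // n_subjects)

# Train on all-but-one subject, test on the held-out subject, then rotate.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"Mean accuracy on held-out subjects: {scores.mean():.2f}")
```

On random data like this, accuracy should hover near chance (0.25 for four classes); the study's claim is precisely that real, retinotopically aligned brain data does better.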
Earlier work showed that color could be decoded within a single individual's brain. One influential study used activity patterns across many voxels in the visual cortex to reconstruct which color a person was viewing.
The visual cortex is organized in maps, and nearby points in the scene activate nearby points on the cortical sheet. These maps let the brain keep track of where things are while it analyzes what they are.
Decades of work have charted these visual areas in humans. A landmark review summarized how the maps tile the cortex and how researchers measure them with moving rings and wedges.
Because these maps are stable within and across people, they are ideal anchors for comparing functional responses. Aligning by visual space keeps the most meaningful relationships intact while avoiding crude anatomical matching.
This structure is also a practical bridge for machine learning. It enables a model trained on one group of people to operate on another without re-learning everything from scratch.
Even when two brains share information, the fine details of their response patterns differ. Methods that find a shared representational space across people can capture common structure while respecting individual quirks.
A leading framework known as hyperalignment aligns brains within an abstract information space instead of relying solely on anatomy. By capturing how shared content is represented across different cortical layouts, it improves decoding between subjects.
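A stripped-down relative of that idea can be written as an orthogonal Procrustes problem: find the rotation that best maps one subject's response matrix onto a reference subject's. The sketch below is a simplification with hypothetical data; full hyperalignment iterates this over many subjects to build a shared space rather than aligning to a single reference.

```python
# Sketch only: align subject B's voxel space to subject A's using responses
# to the same training stimuli, then reuse the transform on held-out data.
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(1)
A_train = rng.standard_normal((100, 500))  # (stimuli x voxels) for subject A
B_train = rng.standard_normal((100, 500))  # same stimuli, subject B

# R minimizes ||B_train @ R - A_train|| over orthogonal matrices.
R, _ = orthogonal_procrustes(B_train, A_train)

# Any new pattern from B can now be projected into A's space.
B_test = rng.standard_normal((20, 500))
B_test_aligned = B_test @ R
```

Orthogonal transforms preserve distances between patterns, so the geometry of the representation survives the alignment.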
Building on that framework, the new work uses the spatial code from retinotopic mapping as its scaffold. This choice anchors the alignment in a property that can be measured with high reliability.
This approach reduces the amount of personal calibration needed for useful decoders. With less setup, the tools become faster and more affordable for research – and potentially for clinical use.
“We can’t say that one person’s red looks the same as another person’s red. But to see that some sensory aspects of a subjective experience are conserved across people’s brains is new,” said Bannert.
Beyond identifying colors from other people’s data, the authors report large-scale spatial biases for different hues that vary by area yet stay consistent across individuals. That pattern suggests each area carries its own flavor of color coding that generalizes well.
The team also decoded brightness without measuring a person’s color responses beforehand. That is a clean test of whether shared codes are strong enough to carry over.
Together, these findings point to a shared backbone for color information in early and ventral visual cortex. They also hint at functional pressures that shaped these codes over development and evolution.
If decoders can be trained on a few people and then used with many, color-sensitive brain-computer interfaces become easier to build. Such tools could help evaluate visual function when a person cannot give reliable behavioral responses.
Researchers can also study individual differences more precisely. When you have a common template, deviations become easier to spot and interpret.
These ideas reach beyond color. The same alignment logic can help compare brain patterns for faces, places, words, or sounds when tasks and scanners differ.
As models improve, they could help connect lab findings to real-world vision. That bridge is where the next breakthroughs will likely happen.
The method works with group-level statistics and carefully controlled stimuli. Real-world scenes are messier, and fMRI is slow and noisy.
Shared neural codes do not settle the question of subjective experience. The authors are clear that similar patterns in cortex do not prove that two people’s qualia match.
Color signals interact with memory, attention, and context, which can shift responses in subtle ways. Future work will need to test how robust the shared code is under those pressures.
Ethical care is also required when building decoders for perception. Reading out mental content must respect privacy and consent.
The research is published by the Society for Neuroscience in the journal JNeurosci.