A new paper reports that human brains organize visual information using a shared pattern that survives large individual differences. The team recorded from 19 people with epilepsy and found a common structure in how high-level visual areas respond to images.
That shared structure helps explain why two people, with very different wiring, still agree on what they see. The work comes from Reichman University and the Weizmann Institute of Science in Israel.
The study was led by Ofer Lipman at Reichman University, working with collaborators at the Weizmann Institute of Science.
Lipman analyzed responses while volunteers viewed brief pictures of faces, houses, tools, bodies, patterns, and animals.
The patients already had electrodes implanted for clinical care, which allowed direct recording from the visual cortex.
The researchers focused on intracranial EEG (iEEG), electrodes that record activity from the brain surface or from deeper structures, to catch fast signals that functional MRI is too slow to see.
Each image triggered a burst of activity across many contacts. The team summarized those bursts into patterns and compared the patterns between images, rather than comparing raw channels one by one.
There is strong evidence that high frequency activity in such recordings tracks local neuronal firing. That made these fast measures a good probe of how populations of neurons encode what we see.
Instead of asking whether two brains activate the same exact contacts for a dog, the team asked whether the relation between dog and cat looks similar across brains. They found that it does.
This idea builds on a research method that compares how the brain reacts to different images. Instead of looking at single responses, scientists study the overall pattern.
If one person’s brain shows that cats and dogs trigger similar reactions, while elephants trigger a very different one, other people’s brains usually show the same kind of grouping.
In other words, our brains organize what we see in roughly the same way: things that are alike end up stored in nearby ‘neural neighborhoods,’ even though the exact brain cells involved may differ from person to person.
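To make the idea concrete, here is a minimal sketch of this kind of relational comparison in Python, assuming each person's responses are summarized as an images-by-channels matrix. The data and function names are illustrative, not taken from the study.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def relational_geometry(responses):
    """Pairwise dissimilarities between image responses (images x channels).

    Returns the condensed vector of correlation distances, i.e. one
    person's 'representational geometry', which does not depend on
    which particular channels carry the signal.
    """
    return pdist(responses, metric="correlation")

def shared_structure(responses_a, responses_b):
    """Spearman correlation between two people's geometries.

    A high value means images that evoke similar responses in one brain
    also evoke similar responses in the other, even if the individual
    channels disagree.
    """
    rho, _ = spearmanr(relational_geometry(responses_a),
                       relational_geometry(responses_b))
    return rho

# Illustrative fake data: 60 images x 40 recording contacts per person.
rng = np.random.default_rng(0)
subject_a = rng.standard_normal((60, 40))
subject_b = rng.standard_normal((60, 40))
print(shared_structure(subject_a, subject_b))
```

The geometry is computed separately within each brain, so the two people never need to share a single matching channel.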
The group compared three coding schemes. First, activation pattern coding, which assumes the same orientation of activity across brains for each image.
Second, relational coding, the pattern of distances among neural responses, which the authors formalize as similarity preserved under rotation across brains.
They describe the transform as an orthogonal transformation, essentially a rotation that preserves the distances and angles between responses while letting the specific channels involved differ from brain to brain.
Third, a more flexible linear model that can stretch and skew patterns. In tests on held-out images, relational coding generalized better than both the activation pattern and the linear models.
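The contrast between the rotation-only account and the fully flexible one can be sketched as a cross-validation exercise, here using scipy's orthogonal Procrustes solver for the rotation and ordinary least squares for the unconstrained linear map. The synthetic data below is illustrative, not the paper's analysis.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(1)

# Illustrative fake data: two subjects viewing the same 80 images, each
# response summarized over 30 matched dimensions (e.g. after PCA).
n_train, n_test, dims = 60, 20, 30
subj_a = rng.standard_normal((n_train + n_test, dims))
true_rotation = np.linalg.qr(rng.standard_normal((dims, dims)))[0]
subj_b = subj_a @ true_rotation + 0.3 * rng.standard_normal((n_train + n_test, dims))

A_train, A_test = subj_a[:n_train], subj_a[n_train:]
B_train, B_test = subj_b[:n_train], subj_b[n_train:]

# Rotation-only alignment (relational coding): preserves all pairwise
# distances between image responses.
R, _ = orthogonal_procrustes(A_train, B_train)

# Unconstrained linear map: free to stretch and skew, so it can overfit.
W, *_ = np.linalg.lstsq(A_train, B_train, rcond=None)

def fit_quality(pred, target):
    # Fraction of variance in the target explained by the prediction.
    return 1 - np.sum((pred - target) ** 2) / np.sum((target - target.mean(0)) ** 2)

print("rotation, held-out images:", fit_quality(A_test @ R, B_test))
print("linear,   held-out images:", fit_quality(A_test @ W, B_test))
```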
“Shared” does not mean identical neurons do identical things in every person. The actual contacts that lit up for a dog often differed across people.
What stayed stable was the geometry among responses, the way categories sit relative to one another in high-level visual cortex. That geometric stability is enough to align perception.
“This study brings us one step closer to deciphering the brain’s ‘representational code’,” said Lipman, underscoring the goal of turning messy neural activity into a readable scheme that supports shared perception.
Independent artificial networks trained on the same task often end up with similar internal layouts, up to a rotation. Recent work shows that a measure called centered kernel alignment can capture this kind of match.
The brain results echo that pattern. Brains differ in the fine details, yet preserve relations that matter for recognition.
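As a rough illustration, here is the linear form of centered kernel alignment mentioned above; the score ignores rotations of either representation. The matrices are synthetic stand-ins, not network or brain data.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two representations.

    X and Y are (samples x features) matrices, e.g. the responses of two
    networks (or two brains) to the same stimuli. The score is 1 for
    identical geometries and is unchanged by rotating either space.
    """
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    cross = np.linalg.norm(X.T @ Y, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return cross / (norm_x * norm_y)

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 50))
rotation = np.linalg.qr(rng.standard_normal((50, 50)))[0]
print(linear_cka(X, X @ rotation))  # close to 1.0: same geometry, different axes
```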
Relational coding did better not only on the images the models were fit to. It also won when the team tested on held-out images, which is the real hurdle.
The linear model looked stronger on training data but slipped on validation, a common sign of overfitting. The simpler relational account held steady.
The key signals were bursts in the high frequency range recorded by iEEG. These fast bursts carry detailed information about local neuronal activity.
By averaging responses within a short window after image onset, the team captured consistent patterns while leaving room for timing differences between people.
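A rough sketch of that kind of preprocessing is shown below, using generic scipy filtering rather than the authors' pipeline; the band edges and averaging window are placeholders chosen for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def high_frequency_response(trial, fs, band=(70.0, 150.0), window=(0.1, 0.4)):
    """Average high-frequency amplitude in a window after image onset.

    trial  : 1D voltage trace for one contact, time-locked to image onset.
    fs     : sampling rate in Hz.
    band   : band edges in Hz (illustrative high-frequency range).
    window : start and end of the averaging window in seconds after onset.
    """
    # Band-pass filter to the high-frequency range, then take the
    # instantaneous amplitude via the Hilbert transform.
    b, a = butter(4, band, btype="bandpass", fs=fs)
    amplitude = np.abs(hilbert(filtfilt(b, a, trial)))

    # Average within the post-onset window, which leaves room for small
    # timing differences between people.
    start, stop = (int(t * fs) for t in window)
    return amplitude[start:stop].mean()

# Illustrative fake trial: 1 second of data at 1000 Hz.
fs = 1000
trial = np.random.default_rng(3).standard_normal(fs)
print(high_frequency_response(trial, fs))
```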
The data come from patients with epilepsy and from specific recording locations determined by medical needs. That limits coverage of the whole visual system.
The number of images per category was modest, and the categories were discrete. Real life vision is richer and more continuous.
Anatomically nearby contacts in different people did show some tuning similarity, but the effect was weak at a fine scale, which argues against a simple one-to-one anatomical match at the level of single contacts.
Randomly pairing contacts between brains did not erase the relational advantage. That supports the idea that the geometry itself, not perfect channel matching, carries the stable code.
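A toy version of that control, with synthetic data standing in for real recordings: permute the channel correspondence between two subjects and check that the geometry-based match survives while a direct channel-by-channel match does not.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(4)

# Illustrative fake data: subject B shares subject A's geometry via a
# rotation, with matched channel counts so channels can be re-paired.
images, channels = 60, 40
subj_a = rng.standard_normal((images, channels))
rotation = np.linalg.qr(rng.standard_normal((channels, channels)))[0]
subj_b = subj_a @ rotation

# Randomly re-pair the channels of subject B.
shuffled_b = subj_b[:, rng.permutation(channels)]

# Direct pattern match: correlate responses contact by contact.
pattern_match = pearsonr(subj_a.ravel(), shuffled_b.ravel())[0]

# Relational match: correlate the two subjects' image-by-image geometries.
# Euclidean distances are unchanged by rotation or channel re-pairing.
relational_match = spearmanr(pdist(subj_a, "euclidean"),
                             pdist(shuffled_b, "euclidean"))[0]

print("pattern match after shuffling:   ", round(pattern_match, 3))
print("relational match after shuffling:", round(relational_match, 3))
```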
Shared relational structure helps two strangers label the same scene with the same words. That supports communication and cooperation in crowded, noisy settings.
It also hints at why people with very different experiences still agree on basic categories. The common layout in high level cortex keeps category neighborhoods stable.
Future work can test whether the same relational code holds for motion, textures, or scenes with multiple objects. Naturalistic movies could probe whether the geometry stays stable over time.
Another test is whether training shifts the layout in predictable ways. If expertise with birds compresses bird distances, the model should pick up the change.
A stable relational geometry could guide better brain computer interfaces. Decoders that respect distances, not just raw patterns, may generalize across users more reliably.
It could also help align human and machine perception. If models match the human geometry more closely, their predictions about behavior should improve.
The message is not that brains look the same inside. The message is that what matters for perception may be the shape of relations, which many brains can share despite their differences.
The study is published in Nature Communications.