05-10-2024

Spatial hearing: How humans know where sounds come from

Recent research has challenged a longstanding theory about how humans pinpoint the origins of sounds through spatial hearing — a discovery with significant implications for the future of auditory technology.

Originally proposed in the 1940s, the established model suggested that humans locate sounds using a complex network of neurons specifically tuned for spatial hearing.

This model of spatial hearing has shaped the development of auditory technologies from hearing aids to smartphones, under the assumption that human auditory processing relies on a highly specialized and intricate system.

However, recent findings from a team of researchers have turned this theory on its head. Their study demonstrates that, contrary to previous beliefs, our brains utilize a far less complex system for spatial hearing, similar to that of other mammals.

This revelation not only simplifies our understanding of human auditory processing but also paves the way for more efficient and adaptable hearing technologies.

How spatial hearing works

Also known as sound localization, spatial hearing is the ability of the auditory system to determine the location and direction of a sound source in three-dimensional space.

This ability is crucial for humans and many animals to navigate their environment, avoid dangers, and locate potential mates or prey. Key aspects of spatial hearing include:

  • Binaural cues: The brain uses differences in the timing and intensity of sound arriving at each ear to determine the location of the sound source. These cues are known as interaural time differences (ITDs) and interaural level differences (ILDs); a brief sketch below illustrates how the timing cue works.
  • Monaural cues: The shape of the outer ear (pinna) and the head itself affect the frequency spectrum of the sound reaching the eardrum. These spectral cues help in determining the elevation of the sound source and whether it is coming from the front or back.
  • Head movements: Turning the head can help resolve ambiguities in sound localization, particularly in distinguishing between sounds coming from the front or back.
  • The precedence effect: When a sound is followed by its echo, the auditory system gives precedence to the first-arriving sound, suppressing the perception of the echo. This helps in localizing sounds in reverberant environments.
  • Auditory scene analysis: The brain can separate and group sounds from multiple sources based on their spatial locations, allowing us to focus on a particular sound source in a noisy environment (e.g., the “cocktail party effect”).

Spatial hearing is a complex process involving the interaction of various neural mechanisms in the brainstem, midbrain, and cortex.
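To make the binaural timing cue concrete, here is a minimal Python sketch (an illustration only, not code or parameters from the Macquarie study). It simulates a sound arriving from one side by delaying one ear's copy of the signal, then recovers the direction by finding the time shift that best realigns the two ear signals. The inter-ear spacing, sample rate, and the far-field formula ITD = (d / c) × sin(azimuth) are simplifying assumptions.

```python
import numpy as np

# Illustrative constants (assumptions for this sketch, not figures from the study):
EAR_DISTANCE_M = 0.21    # approximate distance between the ears, in metres
SPEED_OF_SOUND = 343.0   # speed of sound in air, metres per second
SAMPLE_RATE = 44_100     # audio sample rate, in Hz

def simulate_binaural(signal, azimuth_deg):
    """Mimic an interaural time difference by delaying one ear's copy of the signal."""
    # Far-field approximation: ITD = (d / c) * sin(azimuth)
    itd = (EAR_DISTANCE_M / SPEED_OF_SOUND) * np.sin(np.radians(azimuth_deg))
    delay = int(round(itd * SAMPLE_RATE))
    left, right = signal.copy(), signal.copy()
    if delay > 0:    # source to the right: the left ear hears it slightly later
        left = np.concatenate([np.zeros(delay), left[:-delay]])
    elif delay < 0:  # source to the left: the right ear hears it slightly later
        right = np.concatenate([np.zeros(-delay), right[:delay]])
    return left, right

def estimate_azimuth(left, right):
    """Recover the direction (in degrees) from the lag that best aligns the two ears."""
    max_lag = int(EAR_DISTANCE_M / SPEED_OF_SOUND * SAMPLE_RATE) + 1
    lags = np.arange(-max_lag, max_lag + 1)
    # Cross-correlate over physically plausible lags and pick the peak.
    scores = [np.dot(left[max_lag:-max_lag], np.roll(right, lag)[max_lag:-max_lag])
              for lag in lags]
    itd = lags[int(np.argmax(scores))] / SAMPLE_RATE
    sin_az = np.clip(itd * SPEED_OF_SOUND / EAR_DISTANCE_M, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_az)))

# Example: a 100-millisecond noise burst arriving from 30 degrees to the right.
rng = np.random.default_rng(0)
burst = rng.standard_normal(SAMPLE_RATE // 10)
left, right = simulate_binaural(burst, azimuth_deg=30.0)
print(f"Estimated azimuth: {estimate_azimuth(left, right):.1f} degrees")
```

In practice the auditory system combines this timing cue with level and spectral cues, but even this single cue is enough to recover the rough direction of the example noise burst.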

Spatial hearing and auditory processing

The research, conducted by a team from Macquarie University, utilized advanced hearing tests and brain imaging techniques to compare human auditory processing and spatial hearing with those of other mammals, including rhesus monkeys and gerbils.

What they discovered was surprisingly straightforward: rather than relying on a dense network of specialized neurons, both humans and other species employ a sparser and more energy-efficient neural network for spatial hearing.

“We like to think that our brains must be far more advanced than other animals in every way, but that is just hubris. We’ve been able to show that gerbils are like guinea pigs, guinea pigs are like rhesus monkeys, and rhesus monkeys are like humans in this regard,” shares Distinguished Professor David McAlpine, the lead researcher.

Implications for hearing technology and beyond

This simpler approach to auditory processing is not only fundamental for spatial hearing but also crucial for distinguishing speech from background noise. This ability is vital in what’s known as the ‘cocktail party problem’ — the challenge of focusing on a single voice in a noisy environment.

Current technologies, including smart devices and hearing aids, struggle with this issue, but the new findings suggest that a shift toward simpler, more efficient neural models could lead to significant improvements.

Professor McAlpine suggests that the focus should move away from large language models, which are effective in predicting text but may be less suited for auditory functions. “LLMs are brilliant at predicting the next word in a sentence, but they’re trying to do too much,” Professor McAlpine remarks.

Instead, he advocates for a model that emulates the ‘shallow brain’ strategy used by humans and other animals, which involves picking out brief sound snippets and using them to identify a sound’s location and possibly its source, without needing to process the language itself.

The future of auditory research

The team now plans to investigate how little auditory information is needed to achieve the highest possible spatial hearing resolution.

Their findings not only enhance our understanding of human and animal hearing but also hold the potential to revolutionize how machines listen and interact with their environment.

By embracing the simplicity of our ‘gerbil brain,’ future technologies might be able to more effectively parse speech and sound in real-world settings, making them more intuitive and accessible for users worldwide.

This research marks a significant shift in our approach to designing auditory systems, both biological and mechanical, and could lead to a new generation of hearing devices that are both more natural in their function and more effective in their application.

The full study was published in the journal Current Biology.
