It’s easy to believe that the internet is an open door to knowledge. A place where a user’s search leads to discovery. A tool for understanding new ideas, correcting mistakes, or seeing both sides of a debate. But in practice, something very different is happening.
Especially now, when polarization defines everything from public health to climate action to political opinion, many people go online hoping to learn – and come away even more convinced they were right all along.
The surprising part? These biases are not the fault of online search engines alone.
A study from Tulane University has exposed a quiet but powerful driver behind the entrenchment of opinion: the way people frame their questions in the first place. And even unbiased algorithms can’t save us from the consequences of our own wording.
The Tulane study offers a wide-angle look at how search behavior deepens existing beliefs. It’s not about misinformation. It’s not about manipulation. It’s about routine, everyday habits that unknowingly reinforce what people already think.
According to the researchers, most users choose search terms that reflect their current opinions, often without realizing it. The search engine then returns the most relevant results based on those terms – leading users into a loop of self-confirmation.
“When people look up information online – whether on Google, ChatGPT or new AI-powered search engines – they often pick search terms that reflect what they already believe (sometimes without even realizing it),” noted study lead author Eugina Leung, assistant professor at Tulane’s A. B. Freeman School of Business.
“Because today’s search algorithms are designed to give you ‘the most relevant’ answers for whatever term you type, those answers can then reinforce what you thought in the first place. This makes it harder for people to discover broader perspectives.”
This pattern held steady across 21 experiments with nearly 10,000 participants. Regardless of the subject – caffeine, nuclear power, COVID-19, or crime statistics – people shaped their searches to match their beliefs.
Someone confident that caffeine is healthy might search for “benefits of caffeine,” while a skeptic would look up “caffeine health risks.” That small difference pushed them toward vastly different sources, perspectives, and conclusions.
The team also explored whether users were consciously seeking validation through their searches. Surprisingly, most weren’t. In several experiments, fewer than 10% of participants admitted they were deliberately using search to find results that supported what they already believed.
Yet the majority still typed queries that aligned with their personal views. Even without any conscious intent to confirm a belief, their search patterns looked the same.
That’s what makes this issue so subtle. People can fall into biased search loops even when they think they’re being neutral. Search engines prioritize what they interpret as relevance, so they reward the exact words entered.
The result is a kind of algorithmic loyalty to the user’s original phrasing, whether or not that phrasing reflects a full picture of the topic.
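To make that loop concrete, here is a toy sketch, in Python, of a ranker that scores articles purely by keyword overlap with the query. The article list, the scoring rule, and the example queries are illustrative assumptions rather than anything Google or the study actually uses, but they show how rewarding the exact words entered hands each searcher a results page that mirrors their own framing.

```python
import re

# A toy, hypothetical sketch of keyword-overlap ranking. It is not the
# algorithm used by Google or in the Tulane study; it only illustrates how
# rewarding term overlap keeps results close to the query's original framing.

ARTICLES = [
    "The benefits of caffeine: improved focus and alertness",
    "Caffeine health risks: anxiety, insomnia, and heart palpitations",
    "Benefits of moderate caffeine intake for athletic performance",
    "Health risks of excessive caffeine consumption in adolescents",
]

def tokens(text: str) -> set[str]:
    """Lowercase the text and return its words as a set."""
    return set(re.findall(r"[a-z]+", text.lower()))

def relevance(query: str, document: str) -> int:
    """Score a document by how many of the query's words it contains."""
    return len(tokens(query) & tokens(document))

def search(query: str, top_k: int = 2) -> list[str]:
    """Return the top_k articles that best match the query's wording."""
    return sorted(ARTICLES, key=lambda doc: relevance(query, doc), reverse=True)[:top_k]

# Two searchers with opposite hunches get two very different result pages.
print(search("benefits of caffeine"))    # pro-caffeine articles rise to the top
print(search("caffeine health risks"))   # risk-focused articles rise to the top
```

Both searchers are served perfectly "relevant" results, yet they never see the same evidence.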
The issue isn’t just with search engines. AI tools like ChatGPT also reflect users’ biases. When queries are slanted, AI responses often match the tone and viewpoint of the question.
Even when other views are mentioned, the response still leans toward the user’s original belief. These tools seem neutral but rely heavily on how users frame their questions.
The researchers tested multiple approaches to see if they could encourage people to seek out more balanced information. Merely advising users to consider different views didn’t work. Neither did suggesting that they run multiple types of searches. Behavioral nudges and awareness campaigns weren’t enough.
But one change worked consistently: modifying the algorithm itself. In one experiment, researchers adjusted the search tool to deliver a broader range of articles – regardless of how narrowly the query was phrased – exposing users to more diverse perspectives.
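As a rough illustration of what such a broadening step might look like (the study does not publish its ranking code, so the stance labels and the interleaving rule below are assumptions), the re-ranker could group candidate results by viewpoint and alternate between groups so that no single framing fills the page.

```python
from itertools import zip_longest

# A hypothetical sketch of a "broaden the results" re-ranking step, not the
# tool used in the study. It assumes each candidate result carries a stance
# label and interleaves stances so no single framing dominates the page.

def broaden(ranked_results: list[dict], stances=("pro", "con", "neutral")) -> list[dict]:
    """Interleave results across stance labels, keeping relevance order within each stance."""
    buckets = {s: [r for r in ranked_results if r["stance"] == s] for s in stances}
    mixed = []
    for row in zip_longest(*buckets.values()):
        mixed.extend(r for r in row if r is not None)
    return mixed

# Relevance-ranked results for "benefits of caffeine" skew pro; broadening mixes stances.
results = [
    {"title": "Benefits of caffeine for focus", "stance": "pro"},
    {"title": "Caffeine boosts athletic performance", "stance": "pro"},
    {"title": "Caffeine linked to sleep disruption", "stance": "con"},
    {"title": "What the evidence says about caffeine", "stance": "neutral"},
]
print([r["title"] for r in broaden(results)])
# ['Benefits of caffeine for focus', 'Caffeine linked to sleep disruption',
#  'What the evidence says about caffeine', 'Caffeine boosts athletic performance']
```

The point of the sketch is the design choice rather than the specific rule: the decision to diversify moves out of the user’s query and into the ranking step itself.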
This had a clear effect: users were more likely to adopt moderate views, showed greater openness to changing their behavior, and didn’t find the broader results less helpful or accurate.
Participants appreciated the balanced approach, rating the mixed results as just as relevant as the ones that matched their original wording.
That challenges the idea that users only want search content that aligns with their beliefs and suggests they will value a fuller view – so long as they aren’t required to search for it themselves.
The researchers proposed a “Search Broadly” feature – an alternative to Google’s “I’m Feeling Lucky.” Rather than showing the most relevant result, it would present diverse sources.
Users responded positively, showing interest in tools that challenge rather than reflect their views. This approach isn’t about restricting content, but about design.
By prioritizing perspective over confirmation, algorithms could interrupt polarization at its source: the search query.
Search engines and AI platforms aren’t neutral containers. They’re decision-makers. The order of every result list and the phrasing of every answer rest on calculations of relevance, popularity, or usefulness.
And these calculations reflect assumptions about what the user wants. But what if those assumptions are wrong? What if users aren’t best served by hearing what they expect?
“Because AI and large-scale search are embedded in our daily lives, integrating a broader-search approach could reduce echo chambers for millions (if not billions) of users,” noted Professor Leung.
“Our research highlights how careful design choices can tip the balance in favor of more informed, potentially less polarized societies.”
The study is published in the journal Proceedings of the National Academy of Sciences.