The silent force behind online echo chambers? Your Google search

Assistant Professor of Marketing Eugina Leung

In an era defined by polarized views on everything from public health to politics, a new Tulane University study offers insight into why people may struggle to change their minds—especially when they turn to the internet for answers.

Researchers found that people often use search engines in ways that unintentionally reinforce their existing beliefs. The study, published in the Proceedings of the National Academy of Sciences, shows that even unbiased search engines can lead users into digital echo chambers—simply because of how people phrase their search queries.

"When people look up information online—whether on Google, ChatGPT or new AI-powered search engines—they often pick search terms that reflect what they already believe (sometimes without even realizing it),” said lead author Eugina Leung, an assistant professor at Tulane’s A. B. Freeman School of Business. “Because today’s search algorithms are designed to give you ‘the most relevant’ answers for whatever term you type, those answers can then reinforce what you thought in the first place. This makes it harder for people to discover broader perspectives.”

Across 21 experiments involving nearly 10,000 participants, researchers tested how people search for information on major platforms. Whether users were researching caffeine, nuclear energy, crime rates or COVID-19, their search terms tended to be framed in a way that aligned with their preexisting opinions.

For example, people who believe caffeine is healthy might search “benefits of caffeine,” while skeptics might type “caffeine health risks.” Those subtle differences steered them toward drastically different search results, ultimately reinforcing their original beliefs.

The effect persisted even when participants had no intention of confirming a bias. In a few studies, fewer than 10% admitted to deliberately crafting their search to validate what they already thought, yet their search behavior still aligned closely with their beliefs.

The issue isn’t just how people search but also how search engines respond. Most search engines are designed to prioritize relevance, showing results closely tied to the exact words a user types. But that relevance can come at the cost of perspective.
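
To make that mechanism concrete, here is a minimal, purely illustrative Python sketch of a keyword-overlap ranker. The documents, scoring rule and queries are hypothetical and are not taken from the study; the point is only that a system optimizing for literal relevance surfaces different top results for “benefits of caffeine” than for “caffeine health risks.”

```python
# Illustrative only: a toy index and a naive relevance score based on how many
# of the query's words appear in each document. Not the study's methodology.
TOY_INDEX = [
    "Study finds benefits of caffeine for alertness and exercise performance",
    "Doctors outline caffeine health risks including anxiety and poor sleep",
    "Review weighs both the benefits and risks of moderate caffeine intake",
]

def relevance(query: str, doc: str) -> int:
    """Score a document by how many of the query's words it contains."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def search(query: str, k: int = 2) -> list[str]:
    """Return the k documents whose wording best matches the literal query."""
    return sorted(TOY_INDEX, key=lambda doc: relevance(query, doc), reverse=True)[:k]

print(search("benefits of caffeine"))   # top hit is the benefits article
print(search("caffeine health risks"))  # top hit is the risks article
```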

This dynamic held true even for AI-based tools like ChatGPT. Although AI responses often briefly mentioned opposing views, users still came away with stronger beliefs that matched the slant of their original query.

The researchers tested several ways to encourage users to broaden their views. Simply prompting users to consider alternative perspectives or perform more searches had little effect. However, one approach worked consistently: changing the algorithm.

When search tools were programmed to return a broader range of results—regardless of how narrow the query was—people were more likely to reconsider their beliefs. In one experiment, participants who saw a balanced set of articles about caffeine health effects walked away with more moderate views and were more open to changing their behavior.

Users rated the broader results as just as useful and relevant as the narrowly tailored ones. The findings suggest that search platforms could be crucial in combating polarization—if designed to do so. The researchers even found that most people were interested in using a “Search Broadly” feature—a button (conceptualized as doing the opposite of Google’s current “I’m Feeling Lucky” button) that would intentionally deliver diverse perspectives on a topic.
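
The article does not describe the researchers’ actual implementation, but one way to picture a “Search Broadly”-style intervention is to pair the user’s query with a counter-framed companion query and interleave the two result lists. The sketch below reuses the toy TOY_INDEX and search() from the earlier snippet; the COMPLEMENTARY_FRAMINGS mapping and the interleaving step are assumptions made purely for illustration.

```python
# Hypothetical framing pairs: terms that tend to appear in one-sided queries.
COMPLEMENTARY_FRAMINGS = {
    "benefits": "risks",
    "risks": "benefits",
}

def counter_query(query: str) -> str:
    """Build a companion query by swapping one-sided framing words."""
    return " ".join(COMPLEMENTARY_FRAMINGS.get(w.lower(), w) for w in query.split())

def search_broadly(query: str, k: int = 3) -> list[str]:
    """Interleave results for the user's query and its counter-framed companion."""
    narrow = search(query, k)                  # toy search() from the earlier sketch
    counter = search(counter_query(query), k)
    merged: list[str] = []
    for pair in zip(narrow, counter):
        for doc in pair:
            if doc not in merged:
                merged.append(doc)
    return merged[:k]

print(search_broadly("benefits of caffeine"))
print(search_broadly("caffeine health risks"))
# Both framings now return the same mixed set of articles.
```

In this toy setup, either framing of the caffeine query surfaces the same mixed set of articles, loosely mirroring the balanced result sets the researchers tested.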

“Because AI and large-scale search are embedded in our daily lives, integrating a broader-search approach could reduce echo chambers for millions (if not billions) of users,” Leung said. “Our research highlights how careful design choices can tip the balance in favor of more informed, potentially less polarized societies.”