The silent force behind online echo chambers? Your Google search

Eugina Leung

Assistant Professor

Eugina Leung is an expert in technology and consumer judgment, exploring how technology can hinder identity-based consumption.

A new study reveals that 90% of people unknowingly phrase search queries to match their existing beliefs.

According to lead author Eugina Leung, an assistant professor at Tulane University’s A. B. Freeman School of Business, that subtle bias can trap users in digital echo chambers, reinforcing views on everything from caffeine to COVID-19.

Published in the Proceedings of the National Academy of Sciences, the study tested nearly 10,000 participants and found that even AI tools like ChatGPT can deepen polarization unless search engines are redesigned to deliver broader perspectives. The fix? A simple algorithm tweak could help millions break out of their bubbles.

"When people look up information online—whether on Google, ChatGPT or new AI-powered search engines—they often pick search terms that reflect what they already believe (sometimes without even realizing it). Because today’s search algorithms are designed to give you ‘the most relevant’ answers for whatever term you type, those answers can then reinforce what you thought in the first place. This makes it harder for people to discover broader perspectives, Leung said.

For interviews, contact Roger Dunaway at roger@tulane.edu or 504-452-2906.
