Social Media Is Junk Food for Information Foragers

Social media exploits our evolved need for information, feeding us fluff and outright misinformation. A new science of human collective behavior can help us retake control

Illustration: a person falling into the screen of a smartphone. Credit: Rob Dobi/Getty Images

In June U.S. Surgeon General Vivek Murthy suggested that social media should, like tobacco products, carry a surgeon general’s warning about the harm it can cause users. His call came amid a vigorous national debate about social media’s role in the reported nationwide decline in mental health among young people.

Many share Murthy’s concern that social media platforms such as Instagram and TikTok pose threats to individual well-being, especially among young people. Some are skeptical, however, and still others focus on national security risks. Regardless of who is right, we wouldn’t be having this heated national conversation about appropriate responses to social media platforms if it were not for their appeal. Why are we so captivated by the social media experience?

As biologists who have used and studied social media for many years, we see an answer in the evolutionary history of our species. Over the past few millennia, humans have developed the technologies we need to fulfill our evolved desires. Climate-controlled homes shelter us from wind, rain and snow. Medications relieve our aches and pains. In a world where calories were scarce, we evolved to desire foods rich in sugar and fat. Now we walk a daily gauntlet through junk food aisles and face an epidemic of opioid misuse. We have so successfully mastered our environment that we poison ourselves with too much of a good thing.


Something similar is happening with information. We are, after all, a species of information foragers. We spend our lives learning about our world and how to manipulate it. We forage individually, noticing patterns, experimenting, learning the signs of opportunity or danger and testing mental predictions. We also forage collectively, watching and copying one another, teaching our children and peers, sharing observations, gossiping and making collective sense of the world.

We have evolved to enjoy learning for its own sake. Evolution favors behaviors that, in some environments, are associated with reproductive success: sleeping when tired, taking shelter when cold, eating when hungry and having sex when feeling desire. These behaviors, learning included, increase our evolutionary fitness, so we have evolved to find them pleasurable.

Historically, information was scarce. But over the past 30 years, we have fashioned the Internet into a direct pipeline that delivers an endless stream of information. And yet, despite unlimited access to the most powerful information retrieval system ever created, we spend much of our time engaging with fluff at best and disinformation at worst.

We evolved to forage for information in naturalistic settings, exploring our physical environment and interacting with groups of at most a few hundred people. Our habits and heuristics work well in those settings. But today we forage in complex digital spaces, connected to networks comprising millions of people and guided by some of the most elaborate technologies ever developed.

Moreover, we forage today in an adversarial arena. The online spaces where we spend our time have been constructed to exploit our evolved psychology. The Web runs on advertising dollars, so platforms compete fiercely for our attention. To win this contest, information technology companies draw upon a vast wealth of behavioral data compiled from our use of their websites. It would take extraordinary optimism to expect that our evolved modes of information foraging will continue to function well in this new synthetic environment.

We are locked in an arms race with online platforms that use our behavior as a guide for driving engagement. When we collectively settled for a World Wide Web driven by advertising revenue, we hitched ourselves to the consequences of an unregulated commercial free-for-all for our attention.

Because we forage for information collectively, changes in individual behavior alter our shared understanding of the world. When algorithms show us inflammatory material and highlight instances of conflict, for example, we may perceive society to be more polarized than it really is. This not only generates stress and anxiety but also threatens our very principles of government. If we can be convinced that half of our fellow voters are irrational or evil, we lose faith in the concept of democracy. It’s no coincidence that when some polarizing event occurs—say, a mass shooting or a court decision about abortion—foreign adversaries rush to amplify both sides of the conflict on social media.

Thus far we have painted a dismal picture. Humans evolved to crave information, and that hunger explains much of the quagmire we find ourselves in.

So do social media platforms need warning labels? Should Congress ban TikTok? First of all, these are very different questions. We need to think clearly about our motives and distinguish our concerns about national security from our concerns about how social media impacts mental health.

On the national security side, if we’re afraid foreign adversaries are tracking our online activity, we ought to regulate the data brokers that already sell this information to them (an issue addressed in a recent presidential executive order). If we’re afraid of the potential for propaganda, we need to regulate the use of algorithmic content delivery on a range of platforms and require that users have the option to opt out of algorithmic curation.

On the public health side, we need to not only address the business practices that exacerbate the crisis but also recognize the root cause: the way that our evolved desires leave us vulnerable to these new technologies.

These problems won’t fix themselves, and we consider it unlikely that media conglomerates will ever acquiesce to the large-scale changes required to support better, safer, healthier social media practices. And the specter of generative artificial intelligence—with its unlimited capacity to fabricate information faster than we can read it—poses an additional challenge.

Still, we need not resign ourselves to accepting the present harms. The human capacity for metacognition—which includes the ability to step back and evaluate how our actions affect our best interests—is just as much a part of who we are as are our foraging instincts.

Change won’t be easy, but it is essential. If we hope to manage the existential threats facing our world—racism, war, food insecurity, extinction, climate change, pandemic disease, violent extremism—society needs reliable information. To get there, we need a new science of human collective behavior to understand how digital information technologies shape beliefs, opinions, feelings and decisions on a global scale. Resolving this crisis will require transparency about social media use and social media harms. It will require thoughtful planning to balance redress for these harms against fundamental human rights to speech. And most of all, it will require a will to act so that technology may serve us instead of us serving technology.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.

Carl T. Bergstrom is a professor in the department of biology and a faculty member at the Center for an Informed Public at the University of Washington. Over the past decade, he has written numerous scholarly articles on the spread of misinformation. He is co-author of the undergraduate textbook Evolution and of Calling Bullshit: The Art of Skepticism in a Data-Driven World.

C. Brandon Ogbunu is an assistant professor in the department of ecology and evolutionary biology at Yale University and an external professor at the Santa Fe Institute. In addition to his work on disease evolution and epidemiology, he has published articles at the intersection of science, technology and culture in various venues. These include several that explore data ethics, algorithmic bias and misinformation.
