Experts Fighting Online Misinformation ‘Vindicated’ by Supreme Court Ruling

A recent Supreme Court decision allows the US government to keep talking to scientists and social-media companies in its efforts to curb online falsehoods


The US Supreme Court has ruled that the government can continue communicating with researchers and social-media platforms in an effort to reduce misinformation on subjects such as elections and vaccines. The decision stops short of declaring that such activities are protected as free speech under the US Constitution, but it nonetheless represents a win for researchers who continue to face lawsuits alleging that they worked with the government to suppress conservative opinions.

“This ruling is a major victory for independent research,” says Rebekah Tromble, who leads the Institute for Data, Democracy and Politics at George Washington University in Washington DC. “In rejecting the conspiracy theories at the heart of the case, the Supreme Court demonstrated that facts do still matter.”

In the past four years, US researchers have run two high-profile rapid-response projects — both led jointly by the Stanford Internet Observatory in California and by the University of Washington in Seattle — to track, report and counter disinformation, which is deliberately intended to mislead, and misinformation, which is inaccurate but not necessarily fuelled by malicious intent. For both projects, the researchers flagged virally spreading falsehoods as quickly as they could to social-media companies and US agencies, and their reports were released publicly. At the same time, the federal government was also pointing out this type of problematic content to Facebook and other platforms.

Many conservative activists and politicians thought that these efforts were politically motivated and targeted Republican voices, including those who posted online — incorrectly and without any evidence — that the 2020 US presidential election was rigged. They have launched congressional investigations and brought multiple lawsuits, including the one against the US government that the Supreme Court decided today. This case was originally filed in May 2022 in a federal district court in Louisiana by plaintiffs including the then-attorneys-general of Missouri and Louisiana, both of whom had also challenged whether US President Joe Biden actually won the 2020 election.

In their ruling, the Supreme Court’s justices rejected the plaintiffs’ claims, noting that social-media platforms are private entities that began making their own decisions about how to moderate content well before the US government contacted them about misinformation. There is no specific evidence that government pressure to remove falsehoods unduly influenced those decisions, the court ruled; nor was there any evidence of harm.

The Supreme Court has yet to rule on a related case focused on US state regulations that attempt to limit social-media companies’ ability to regulate conversations on their platforms.

It remains to be seen how today’s ruling will impact lawsuits against scientists, but legal scholars and misinformation researchers contacted by Nature say that it represents a clear win for academic freedom.

“Elated. And vindicated” — that was the response to Nature from Kate Starbird, a misinformation researcher at the University of Washington in Seattle, who was involved in both academic efforts to combat misinformation.

Issuing tickets

One of the two rapid-response projects connected to today’s decision, the Electoral Integrity Partnership, kicked off in early September 2020 and ran for 11 weeks. Researchers scoured Twitter, Facebook and other social-media platforms for misinformation about the 2020 US presidential election and generated more than 600 ‘tickets’ to flag questionable posts to the platforms. Most of the false narratives identified were associated with the political right, and around 35% of them were later labelled, removed or restricted by the companies.

The researchers then adapted this model to create the Virality Project, which ran from February to August 2021. They identified more than 900 instances of problematic social-media posts relating to the COVID-19 pandemic, most of which focused on falsehoods about vaccines. Of those, 174 — the most serious cases — were referred to US health authorities and to social-media firms for potential action.

In addition to filing the lawsuit that the Supreme Court ruled on today, conservative activists have peppered researchers with public-records requests and filed other suits accusing the academics behind the two projects of colluding with the federal government to curtail free speech.

Researchers and legal scholars call the accusation absurd and say that there is no evidence for it. They also argue that academics, too, have a right to free speech, and that they were exercising that right by studying and flagging falsehoods online. The other lawsuits are likely to continue for now, Tromble says. However, she notes that the Supreme Court’s dismantling of the plaintiffs’ arguments in today’s ruling “should certainly help researchers being sued by individuals and organizations making similarly distorted claims”.

What’s next?

The lawsuits by conservative activists have sparked fear among misinformation researchers, and changes to the online environment have made it harder for them to carry out their work. For instance, after billionaire entrepreneur Elon Musk bought Twitter (now called X), he instituted policies that cut off academics’ access to data on the platform. Many other social-media companies have also backed away from efforts to moderate content for accuracy.

All of this has discouraged efforts by government agencies and academics to identify and counteract false narratives, says Gowri Ramachandran, a deputy director at the Brennan Center for Justice, a think tank in New York City that advocates for voting rights. She adds that this is of particular concern as the United States prepares for its presidential election in November, in which Biden and Trump are set to face off once again.

For its part, Stanford University has ended its rapid-response work on the misinformation projects and laid off two staff members involved, although researchers will continue to work on election misinformation this year. The decision to halt the rapid-response work had nothing to do with a fear of litigation or congressional investigations, says Jeff Hancock, director of the Stanford Internet Observatory. Acknowledging fundraising challenges, he says that he decided to align the centre’s work with his own research interests after the previous director of the observatory left. However, conservative lawmakers and advocates celebrated the news.

“BIG WIN,” the US House of Representatives Judiciary Committee posted on X on 14 June. It then celebrated the “robust oversight” of committee leader Jim Jordan, a Republican representative from Ohio who voted to overturn the 2020 election results. Jordan has launched congressional investigations of academics linked to the election and health misinformation projects.

Regardless of why Stanford shut the programme down, the decision gives a boost to the politicians who undermined the 2020 election, says Renée DiResta, one of the staff members at the Stanford observatory whose position was not renewed this year. “The right wing is doing victory laps,” she says. “Every university should be horrified by this.”

Hancock acknowledges as much, saying he is frustrated at how things have played out. “I hate that this has been politicized,” he says.

With Stanford stepping aside, the kind of rapid-response work that the Electoral Integrity Partnership conducted in 2020 will now be led by Starbird and her team at the University of Washington. In 2021, the US National Science Foundation awarded her and her partners US$2.25 million over five years to continue their work on disinformation. Starbird says this year’s project will not look like 2020’s and will not include any reporting to social-media platforms — an activity that was conducted by Stanford. Nor will it focus so much on X: with the loss of access to data on that platform, she and her colleagues are turning to other sources, such as Instagram and TikTok. But the work will continue, she says.

“We were doing it before the electoral-integrity project, and we are going to be doing it after,” Starbird says.