The Politics and Perverse Effects of the Fight Against Online Medical Misinformation
abstract. Social media platforms’ moderation of medical misinformation has emerged as one of the most contentious political issues of our time. This Essay traces the evolution of platforms’ approach to medical misinformation during the COVID-19 pandemic and argues that the politicization of platforms’ actions is a product, at least in part, of the difficulty of defining “medical misinformation” as a coherent category. Going forward, platforms’ approach to moderating medical speech should reflect a principled view of the role they can, and cannot, play as gatekeepers of medical knowledge, rather than shifting with the political winds. That role, this Essay argues, needs to be informed by the institutional characteristics of platforms as social spaces, a point that First Amendment case law itself suggests, and cannot assume—as platforms seemed to do at the start of the pandemic—that there is a sharp divide between medical and political speech.
Introduction
Social media platforms’ moderation of medical misinformation has emerged as one of the biggest political controversies of our time. It has been the focus of congressional committees,1 a Supreme Court case,2 and countless newspaper headlines.3 It was also a prominent theme of the 2024 presidential race. When then-candidate J.D. Vance was asked during the vice-presidential debate the (for him, difficult) question of whether Donald Trump had lost the 2020 election, he answered by pivoting to what he argued was the more important issue going forward—“Did Kamala Harris censor Americans from speaking their mind in the wake of the 2020 COVID situation?”4
Vance’s seeming non sequitur was not entirely surprising. Outrage at the heavy-handed way that social media platforms moderated COVID-19 related content also informed President-elect Donald Trump’s political rhetoric during his campaign5 and now looks as if it will be a core plank of his administration’s policy agenda. As of this writing, President-elect Trump is set to name a number of people to high-level roles in his administration who have been vocal in decrying what they have cast as an unprecedented censorship campaign by social media platforms during the COVID-19 pandemic.6 Trump’s pick to lead the Department of Health and Human Services, Robert F. Kennedy, Jr., has himself spread falsehoods and conspiracy theories on a wide variety of health-related issues, most prominently vaccines,7 and has been party to multiple (unsuccessful) lawsuits alleging that platforms’ restrictions on such posts violate his and others’ First Amendment rights.8 Trump’s proposed nominee to head the National Institutes of Health, Dr. Jay Bhattacharya, rose to prominence as a critic of the public-health response to the pandemic9 and went all the way to the Supreme Court to seek redress for restrictions placed on his social media accounts when he posted views contrary to the medical establishment.10 Meanwhile, Trump’s favored Chair of the Federal Communications Commission has vowed to take on “the censorship cartel” of tech companies who “silenced Americans for doing nothing more than exercising their First Amendment rights.”11 The list goes on.12 Backlash against the moderation of medical misinformation is thus almost certain to remain a key theme in our politics for the foreseeable future.
The regulation of medical misinformation was not always so politically loaded. At the start of the COVID-19 pandemic, platforms were widely praised for more aggressively moderating false information about the new and scary virus and were lauded for finally becoming more responsible custodians of the public sphere.13 Now, less than half a decade later, platforms are trying to distance themselves from this approach. At least one social media platform has prominently rolled back its policies against COVID-19 misinformation,14 and another has said that during the pandemic it “made some choices that . . . [it] wouldn’t make today.”15 Platforms clearly believe that policing medical misinformation is not as politically palatable as it was only a few years ago. This did not happen by accident. The politicization of platforms’ decisions has been a concerted project—one that has effectively built on arguments about the repressiveness of the public-health response to the pandemic and fears that social media platforms are biased against conservative viewpoints.16
But it would be a mistake to dismiss all this rhetoric as just politics. Criticism of platforms’ moderation of medical misinformation has resonated because it reflects deeper and more universal concerns about the power and responsibility of social media platforms in the modern information economy. In drawing lines between what content is or is not allowed on their sites, platforms wield an enormous amount of power over public debate. This raises questions about how and why the exercise of such power is legitimate or in the public interest—debates that have been raging for years now.17 It initially seemed like the pandemic made these questions easy to answer, at least with respect to medical misinformation: the public-health emergency justified platforms intervening to protect people from physical harm.18 But questions about the legitimacy of platform power over public debate are never easy to answer, it turns out—even, or perhaps especially, in a public-health emergency. And as platforms’ definition of the kinds of health-related claims they were willing to police expanded over the course of the pandemic to include claims that might be more properly understood as political claims related to health, or that had only an attenuated relationship to direct physical harm, platforms had a harder time defending their decisions. As platforms bowed to political and public pressure (this time, from the opposite side of the political spectrum), the catch-all phrase “medical misinformation” became a vehicle for amorphous anxieties about any false speech related to COVID-19. Calls for platforms to do more often skipped over the important questions about whether platforms had the legitimacy to make the kinds of interventions being demanded.
These questions persist and still demand an answer. In what follows, I argue that instead of platforms’ approach to content moderation blowing in the political winds between overzealousness and repudiation, we should articulate an affirmative vision of the role that platforms can and should (and cannot and should not) play as gatekeepers of medical speech. In articulating that vision, First Amendment cases have something to teach us. Courts, after all, have also grappled with the difficult question of how to balance free-speech values with the value of facilitating access to knowledge. As this body of law suggests, falsity alone cannot and should not be enough to justify suppressing speech in public discourse—even in the context of health claims. Instead, intervention can be justified only where there is a clear relationship to specific and concrete harm or, as in the case of the medical profession’s self-regulation, in the context of particular relationships of vulnerability. Understanding how First Amendment law defines these relationships helps illuminate why overly aggressive moderation by platforms—which occupy a very different sociological role than medical professionals—not only will be ineffective but may even be counterproductive (as current political events suggest).
Indeed, widespread skepticism about platforms’ choices seems here to stay, and it will necessarily impact how responsible platforms should moderate content moving forward. The project of creating a healthy speech environment is not simply a question of determining ideal speech rules in a vacuum. As First Amendment doctrine recognizes, context matters, and the appropriate approach to speech regulation depends on particular sociological facts and relationships. The political environment—especially the institutional legitimacy and perceived trustworthiness of the decision maker—matters enormously to whether speech rules are accepted and effective. The problem is therefore cyclical: platforms overstepping their role during the pandemic opened them up to the critique that they were intervening in politics, a narrative that was then exploited for partisan gain, which in turn undermined users’ trust in platforms’ moderation decisions. That is, the politicization and delegitimization of platforms as trustworthy gatekeepers of medical truth not only makes them less enthusiastic about taking up that role, but also less effective at doing so.
The politicization of medical truth is a problem not only for platforms but for public health more broadly. Public trust and confidence in scientists declined sharply during the COVID-19 pandemic.19 Platforms cannot simply content-moderate this problem away. Instead, we should be much more specific and cautious about the role we ask platforms to play in policing public debate. Debates about platforms’ moderation of medical misinformation are currently largely polarized around two diametrically opposed views: they need to do much more, or they should not do anything at all. This Essay charts a different path forward. Part I tells a brief history of platforms’ content moderation during the pandemic, their struggle to define the “medical misinformation” that they should suppress, and the political consequences that followed. Part II turns to First Amendment doctrine—not because platforms are required to follow it, but because it holds important lessons for the struggle of how to regulate medical truth and falsity. Part III then describes what these lessons mean for platforms—and for all of us, in our expectations of what content moderation can and should do. Medical misinformation is a pressing public-health challenge, but it is also now a political one. When trust in institutions is low, speech suppression is more likely to breed suspicion than inspire confidence. There is no content-moderation shortcut to the hard work of public education and trust building.
I. content moderation and the covid-19 pandemic
As the Supreme Court confirmed just last Term in Moody v. NetChoice,20 the First Amendment largely protects platforms’ decisions about what speech to allow on their services.21 That is, platforms have enormous discretion to pick whatever content-moderation rules they want. In their early years, while most platforms prohibited certain categories of content to make their services more pleasant and palatable for their users (and advertisers), they generally refused to take down content simply because it was false, even when it came to content about medical topics.22 Sustained political pressure and public criticism about antivaccination content did eventually lead some platforms to take steps to reduce the circulation of such content on their services, but they mostly stopped short of outright removal of such claims.23 Social media companies should not, platform executives insisted, be “arbiters of truth.”24
The COVID-19 pandemic, and fears about platforms’ role in spreading the harmful misinformation that accompanied it, brought about dramatic reversals in policy, basically overnight.25 As authorities warned of an “infodemic” of misinformation, social media platforms positioned themselves as part of the frontline response to harmful false claims.26 Suddenly, many platforms became willing to remove certain COVID-19 related content that they judged to be false, breaking with their previous approach to policing medical misinformation and false claims more generally.
But these platforms made clear that their willingness to take down false speech about COVID-19 did not signal a more general willingness to start removing “misinformation” writ large. Instead, this newfound appetite to arbitrate the truth was limited to health-related claims in the context of a public-health emergency. Such content was, platforms insisted, simply different, for two reasons: first, false claims about COVID-19 were more likely to lead directly to physical harm;27 and, second, the truth or falsity of such claims was more susceptible to verification by widely accepted, “authoritative” sources of information, such as the World Health Organization (WHO) and the Centers for Disease Control and Prevention (CDC).28 In other words, platforms justified taking down health misinformation on the basis that it was not merely a matter of politics or opinion, but instead a matter of fact and expertise. These kinds of claims were, in the words of Mark Zuckerberg at the time, “just in a different class” than other kinds of misinformation, because of the “imminent risk of danger” and the fact that “it’s easier to set policies that are a little more black and white.”29
The praise for these moves was swift and widespread—a noticeable change of tune from the years of persistent public criticism of platforms that preceded it. “Facebook [i]s [m]ore [t]rustworthy [t]han the President,” one headline declared.30 “Has the Coronavirus [k]illed the Techlash?” mused another.31 But the honeymoon period did not last long. The neat distinction platforms drew between health misinformation and other kinds of false claims quickly broke down, for several reasons.
First, identifying good and bad information in the context of a public-health emergency caused by a novel virus was (predictably) more difficult than platforms acknowledged. As is to be expected in such circumstances, the guidance from public-health authorities was fluid, constantly changing, and sometimes contradictory.32 To take one of the most infamous examples, during the early stages of the pandemic, institutions such as WHO and the U.S. Surgeon General told the public that masks were not necessary.33 A few months later, mask mandates were widespread.34 In trying to keep up with this shifting guidance, platforms that had initially banned ads for masks on their services had to reverse their policies.35 The earlier insistence that “it’s easier to set policies that are a little more black and white” became harder to maintain. Because platforms (like the public-health authorities themselves) did not effectively communicate their reasons for these reversals, such backtracking simply became proof to critics that attempts to stamp out false claims come with the inherent risk of stamping out valuable true claims.36
Second, these dynamics were both intentionally exploited and unintentionally exacerbated as claims about the virus itself became highly politicized, undermining any easy separation between the political and medical spheres. Most prominently, President Donald Trump consistently made false claims about the pandemic, including promoting ineffective (or dangerous) “miracle cures” and politicizing individual precautionary measures like mask wearing.37 Expressing agreement or disagreement with such claims by political figures became as much a proxy for support for that candidate as a signal of belief or disbelief in the underlying medical claim.38 As a result, platforms found themselves in the position of appearing to take sides in a political debate when they moderated these kinds of claims. Predictably, platforms attracted criticism both when they left political figures’ false claims up39 and when they took them down.40
Third, the opaque relationships that major platforms had with government actors compounded fears of political bias. These platforms worked with federal officials at the White House, the Office of the Surgeon General, and the CDC to give and receive information about what people were saying on social media and to discuss how to combat misinformation.41 But communication to the public about these relationships with government actors was muddy at best and obfuscatory at worst. Platforms did not have a clear message about how they were drawing a line between “deferring to authoritative sources” in determining the facts about COVID-19 and allowing the government to call the shots. This in turn exposed them to criticism that they were suppressing both valuable information and critiques of the government’s response to the pandemic, all at the behest of the officials they were working with.42 Platforms insisted that their content-moderation decisions were always made independently,43 but the lack of transparency into government-platform communications raised the specter that these relationships in fact allowed the government to make exactly the kind of speech-related decisions that the First Amendment places off limits.44
Fourth, despite early praise for platforms’ efforts, pressure for platforms to do more, more, more about the problem of medical misinformation remained unrelenting, leading platforms to expand what they were willing to take down. Platforms bowed to this pressure, no doubt in part because they did not want to be responsible for harm, but also likely in part to avoid political costs. Whatever the reason, many platforms started to expand the category of “medical misinformation” they were willing to police beyond its originally limited contours, undermining the reasons they had pointed to for treating COVID-19 misinformation differently than other kinds of misinformation. Platforms had initially justified their extraordinary moderation in this context on the basis of a direct link between COVID-19 misinformation and physical harm. They therefore focused on removing medical claims that led people to act in ways that put them at serious direct risk of physical harm—such as advocating ingesting a harmful “cure” or exposing themselves to the virus—in situations where the recipient did not have the capacity to properly evaluate that risk. But politicians did not have such a limited definition of the kinds of content platforms should remove.45 As a result, the vague term “medical misinformation” came to encompass a far broader range of false claims about COVID-19, including unverified claims about the origins and nature of the virus and its spread that had a much more attenuated relationship to physical harm. For example, some platforms removed claims that 5G technology networks were responsible for the virus’s rapid spread.46 Others removed posts that suggested that the virus was manufactured.47 Beliefs in such false claims may well lead people to act in misguided ways—vandalizing cell towers that they believed transmitted harmful 5G radio waves, for example48—but the same might be said to be true of many forms of false (or, indeed, true) speech. A person that sets fire to a cell tower knows that they are causing property damage. But someone who ingests a substance under the assumption that it will provide a cure but instead suffers physical harm is very differently situated. The direct link to physical harm that platforms had pointed to as justifying their willingness to moderate COVID-19 misinformation simply did not apply to the expanded category of claims that platforms started to remove.
To make matters worse, platforms defined the category of COVID-19 misinformation somewhat differently from one another. Speech that was not allowed on some platforms (the lab-leak theory of the virus’s origin on Facebook, for example) might be permitted on others (like Twitter or YouTube).49 Rather than alleviating concerns about platform censorship, however, this only intensified them. On the one hand, public discussion about content moderation often did not distinguish between platforms’ approaches. The fact that Meta removed the lab-leak theory while Twitter did not, for example, did not prevent people from using it as an example of platforms’ overreach in general.50 On the other hand, the fact that platforms came to different conclusions about the best approach underlined the subjective nature of the judgments that they had insisted were made on the basis of authoritative guidance. As a result, the mission creep in the expanding definition of “medical misinformation” gave further oxygen to criticisms that platforms were not just removing claims on the basis that they were harmful to people’s health, but going further to please government critics or to shore up the legitimacy and authority of the public-health institutions they were working with.51
Debates about content moderation flattened other important nuances, too. Content moderation at the scale of online social media platforms always involves mistakes, including both false positives and false negatives—and platforms’ enforcement of their medical misinformation policies was no different.52 Indeed, platforms had warned that they would make more enforcement errors during the pandemic because they were forced to rely more heavily on automated moderation with reduced human oversight.53 Such mistakes, however, were easy to exploit for those seeking proof of platforms’ intentional censorship. Similarly, platforms tried to strike a balance between speech and safety by using measures short of taking content down, such as adding fact-checking labels or preventing certain posts from being algorithmically amplified.54 Such measures certainly have an impact on the reach of content, but do not suppress speech in the same way as outright removals of posts. Nonetheless, these measures, too, were decried as censorship.
All of these dynamics meant platforms’ content-moderation practices became a central battleground in the emerging culture wars.55 Politicians on the left continued to lament the inadequacy of platforms’ efforts—a frustrated President Joe Biden accused social media platforms of “killing people” by allowing vaccine misinformation to spread.56 He later walked back this claim,57 but the fact that he made it in the first place only demonstrates how inflammatory political rhetoric about platforms’ level of responsibility had become by that point in the pandemic. Meanwhile, conservative politicians thought platforms were doing far too much—they decried social media “censorship” and expanded on years-long allegations (repeatedly debunked58) that platforms were biased against conservative viewpoints.59
So today, a mere few years after platforms first embraced their role as curators of medical truth, platforms’ belief that there is a category of false claims called “medical misinformation” that they can safely moderate without becoming embroiled in politics looks far too optimistic. What the past few years have made clear is that, one way or another, all misinformation is or can become political misinformation, too.
It is possible that there was no good answer for platforms caught in this bind. With the benefit of hindsight, and the knowledge of all the political weaponization of platforms’ actions that followed, perhaps it is too easy to argue that platforms erred. At the time, public discourse was dominated by fears that platforms’ failure to remove false claims led to significant excess deaths. “Free speech” concerns might have seemed overly abstract in the context of such a pressing public-health emergency.60 But this is precisely why it is important to reflect now on the missteps that were made—because calls for speech suppression are often loudest and hardest to resist in moments of crisis. Understanding why and how that happened is the only way to learn from the experience for the future.
In distilling lessons from this experience, we can turn to existing frameworks. Platforms are not the first speech regulators to wrestle with the competing equities involved in the dissemination of medical knowledge. First Amendment doctrine has also grappled with this tension—between acknowledging that there is such a thing as medical expertise that people need to be able to rely upon, on the one hand, and recognizing the dangers of giving any authority the power to punish dissent, on the other. What the First Amendment cases teach us is that “medical misinformation” is not a special category to which the ordinary reasons for caution about speech suppression do not apply. While there are certain circumstances in which false medical claims can be punished, these are defined by very particular harms or vulnerabilities, rather than by the mere fact of falsity. The next Part explores those lessons.
II. medical misinformation and the first amendment
There are all sorts of reasons why platforms will not and should not adopt First Amendment standards in writing their content-moderation rules, not least that user and advertiser preferences mean that such rules will be very bad for business.61 Nevertheless, the core principles underpinning First Amendment doctrine hold important insights about the difficult project of speech regulation more generally—insights that also can be relevant to the private systems of speech regulation that platforms create when they engage in content moderation. And indeed, in writing and applying their content-moderation policies, platforms have been heavily influenced by the First Amendment tradition.62
One of the most obvious ways in which platforms have been influenced by First Amendment law is in their early reticence to remove false claims.63 It is a central principle of First Amendment law that, generally speaking, the government cannot punish people for saying things that are wrong. In one of the most famous sentences in the First Amendment canon, Justice Holmes declared that “the best test of truth is the power of the thought to get itself accepted in the competition of the market.”64 This sentence, and the marketplace-of-ideas metaphor that it gave rise to, embodies the idea that law should not seek to fix the line between truth and falsity, and that collective knowledge is instead better advanced through the rough and tumble of public discourse and debate. Therefore, in the words of Justice Brandeis, the remedy to false speech “is more speech, not enforced silence.”65
This theory has profoundly influenced First Amendment doctrine.66 The Supreme Court echoed Justices Holmes and Brandeis nearly a century later, in United States v. Alvarez, when it rejected the argument that false speech is presumptively unprotected by the First Amendment.67 In striking down a law that made it a crime to lie about receiving the Congressional Medal of Honor, Justice Kennedy’s plurality opinion argued that upholding such a law would “endorse government authority to compile a list of subjects about which false statements are punishable.”68 Invoking George Orwell’s dystopian novel 1984, Justice Kennedy concluded that such a power is inconsistent with a free society and that “[o]ur constitutional tradition stands against the idea that we need Oceania’s Ministry of Truth.”69
Alvarez is the Court’s most recent and most important opinion on the regulation of lies, and it makes clear that even false speech is protected by the First Amendment. As a general proposition, this conclusion is hard to object to—a right to free speech would mean little if the government could silence people simply by declaring that they said something untrue. After all, the point of guaranteeing a right to free speech is to allow citizens to say things even when the government does not want them to. But this general principle only goes so far, because all systems of free expression also recognize that the principle of free speech has limits. Certainly, when it comes to false speech, it would be a caricature to suggest that the First Amendment prohibits all government punishment of lies and misinformation.70 As Alvarez itself recognizes, the government punishes lies all the time. In his plurality opinion, Justice Kennedy acknowledged—and disclaimed the constitutional vulnerability of—laws that prohibit false statements to government officials, the impersonation of government officials, perjury, “defamation, fraud, or [cases involving] some other legally cognizable harm associated with a false statement.”71
The lesson of Alvarez, then, is one that platforms echoed in their statements early in the pandemic: lies may be punishable in certain circumstances, but there are good reasons to define those circumstances narrowly. Platforms, as nonstate actors, need not be constrained by the especially high bar that the First Amendment demands regulators meet before lies can be punishable.72 But what Alvarez rightly makes clear is that there are no subject categories (like “medical misinformation”) that are per se exempt from the general disciplining requirement that punishable lies be narrowly defined and directly linked to cognizable harm. This is because, whatever the subject matter, prohibitions on false speech are vulnerable to being used to advance political aims or suppress criticism.
The Supreme Court’s decision last Term in Murthy v. Missouri is a clear illustration of courts’ reluctance to apply different First Amendment rules to the regulation of false speech depending on its subject matter.73 The case involved sprawling allegations that during the 2020 election season and the COVID-19 pandemic, members of the Biden Administration had unconstitutionally pressured (or “jawboned”74) social media platforms into removing what the government officials thought was election and medical misinformation from their services.75 The Supreme Court ultimately dismissed the case for lack of standing,76 but only after courts at every prior stage of the case—from the lower courts to oral argument at the Supreme Court—had implicitly rejected the idea that the particular expertise of the government actor, or the kind of speech it was targeting, was relevant to the legal analysis.77 Instead, they invoked the same doctrinal principles, regardless of whether they were discussing government pressure to remove election-related misinformation and foreign-influence campaigns or COVID-19 misinformation. No one suggested, for example, that the CDC deserved greater deference or latitude in its communications with platforms than the FBI because of the CDC’s medical expertise or the health-related (rather than election-related) nature of its remit.
The refusal to treat different kinds of false speech differently, depending on their subject matter, is a very good thing, insofar as it means that government actors cannot aggrandize power simply by reframing the topic of the speech they seek to regulate. A recent example from Florida provides a stark illustration of how such power could be abused. The Florida Department of Health wrote to television stations demanding they not run political ads in support of a constitutional amendment that would protect abortion access in the state. It argued that the ads were a “sanitary nuisance” because they contained false claims and “would likely have a detrimental effect on the lives and health of pregnant women in Florida.”78 As the district court acknowledged in concluding that the Department violated the First Amendment rights of the organization that wanted to run the ads, “if the State can re-brand rank viewpoint discriminatory suppression of political speech as a ‘sanitary nuisance,’ then any political viewpoint with which the State disagrees is fair game for censorship.”79 As this suggests, speech about medical topics and political speech are not clearly distinct categories. For this reason, claims of medical misinformation can also be a powerful tool of government censorship. And indeed, during the pandemic, governments around the world used public-health concerns to justify imposing speech laws that suppressed criticisms of those governments.80
That said, the First Amendment does recognize that medical speech is not always like other speech.81 While health-related speech is treated like any other speech when uttered in public discourse, there are special contexts in which the First Amendment does permit greater regulation. But rather than justifying further intervention by platforms, the rationales for these special rules underline the reasons why platforms have struggled to gain legitimacy as regulators of medical claims.
First, false or misleading health claims made in the context of speech attempting to sell a particular product receive no constitutional protection.82 This kind of deception is regulated by, for example, the Food and Drug Administration and the Federal Trade Commission. However, punishing lies in these contexts is just a subset of the First Amendment’s commercial-speech doctrine, rather than a recognition of any particular characteristics of health-related speech. That doctrine rests on the particular reliance that consumers have on sellers to speak honestly about their products and the knowledge asymmetry inherent in that relationship.83
Second, and more significantly, doctors can be subject to regulation based on what they say in certain contexts, even though such regulation would be presumptively unconstitutional if applied to other speakers. Medical-licensing laws act as a prior restraint on doctors’ speech by allowing them to provide medical advice to patients only after they have received a license from the state, and doctors who give bad advice can be sanctioned on the basis of the content of what they say.84 Indeed, “[w]ithout so much as a nod to the First Amendment, doctors are routinely held liable for malpractice” because they say the wrong thing, or fail to say the right thing.85
The fact that doctors can be punished for providing incorrect information to patients is in obvious tension with the rest of First Amendment doctrine that generally prohibits punishing people for saying things that are wrong or false. For this reason, the limits of this kind of quasi-governmental regulation of falsity are narrowly defined.86 While doctors can be sanctioned for what they say “in the course of professional practice”—which is to say, in the context of a doctor-patient relationship87—they retain their First Amendment rights as citizens when engaging in public debate and when talking to the public at large.88 This results in what Claudia Haupt has aptly named the “Dr. Oz paradox”—the fact that “the law sanction[s] giving bad advice to one patient, while it permits giving bad advice to millions of YouTube or television viewers.”89 (Coincidentally, Dr. Oz has also been tapped to join the next Trump administration.90)
This is a paradox because it is so counterintuitive. We might think that the law should recognize the greater capacity for harm arising from false advice that is widely distributed rather than given to a single person. But this gets it backwards. First Amendment doctrine has been reluctant to hinge the constitutionality of speech regulation in the public sphere writ large on governmental or judicial assessment that speech may be generally bad or harmful if people are persuaded to act upon it in the future,91 in no small part because of the historical track record of the government getting that assessment startlingly and dangerously wrong.92 Thus, it is the regulation of the speech of medical professionals directed toward individual patients that is the exception to the general rule, and only because it happens outside of the bounds of regular public debate. This disparate treatment of doctors’ speech is doctrinally justified by the unique characteristics and context of the communicative act between a doctor and their patient: the particular relationship of vulnerability that a patient has with a doctor; the knowledge asymmetry between them; the doctor’s obligation not to advance their own interests but to act in the best interests of the patient; and the doctor’s responsibility as a representative and conduit of the insights of the broader medical community.93
The last point is crucial—the content-based standards that demarcate the line for liability are generally not fixed by the government, but by the profession itself.94 In medical-malpractice suits, for example, a doctor is measured against the norms established by the medical profession, rather than some externally imposed standard.95 As Haupt has explained, it is this existence of a learned profession—a “knowledge community” on whose behalf the individual doctor speaks in providing professional services—that supplies the theoretical justification for the imposition of liability on those who do not provide care in accordance with that community’s standard of care.96 Thus, decision makers imposing liability for such speech do not do so on the basis of its falseness per se, but on its failure to convey accurately the expert consensus they purport to represent.
While this kind of speech regulation is essential to the preservation of trusted professions, it represents a narrow exception to the general principle that the government cannot sanction speech on the basis of disagreement with its content. The pandemic made clear just how narrow this exception could be. In 2022, California passed a law making it unprofessional conduct for a medical professional to “disseminate misinformation or disinformation related to COVID-19, including false or misleading information regarding the nature and risks of the virus, its prevention and treatment; and the development, safety, and effectiveness of COVID-19 vaccines.”97 The law defined “misinformation” to include only treatment or advice about COVID-19 given to a patient under the physician’s care that was “contradicted by contemporary scientific consensus contrary to the standard of care.”98
When Governor Gavin Newsom signed the bill into law, he implicitly acknowledged the constitutional minefield created by efforts to regulate misinformation by insisting that this particular bill posed no problems because it was “narrowly tailored” to “egregious instances” and did not apply to “any speech outside of discussions directly related to COVID-19 treatment within a direct physician patient relationship.”99 Therefore, he argued, while there may be legitimate reasons to be concerned about “the chilling effect other potential laws may have on [medical professionals],” this bill was different because the definition of misinformation was narrow and the law did not apply to public discourse.100 One federal district court agreed, holding that the law was a permissible regulation of professional conduct.101 Four weeks later, however, another federal district court found the law to be unconstitutionally vague, because “drawing a line between what is true and what is settled by scientific consensus is difficult, if not impossible.”102 The court noted that this was all the more true in the context of COVID-19, “a disease that scientists have only been studying for a few years, and about which scientific conclusions have been hotly contested. COVID-19 is a quickly evolving area of science that in many aspects eludes consensus.”103 Amidst continued litigation and controversy about the law’s constitutionality and chilling effects, California repealed the law in 2023.104
The California law was at once extremely narrow and problematically broad. It reached only medical advice given to an individual patient that was contrary to the standard of care. It is thus not even clear that the law would have enabled sanctions beyond existing restrictions on unprofessional conduct (although the law’s poor drafting made this ambiguous).105 But by invoking the politicized notion of “misinformation” and giving it a broad and vague definition, the law raised fears of government overreach.106 Even within the narrow bounds of a doctor-patient relationship, there is no such thing as “medical misinformation” writ large that the government could constitutionally target.
The California experience reflects the increasing reticence to treat even the speech of medical professionals as exempt from ordinary First Amendment rules prohibiting government regulation of falsity. The Supreme Court has encouraged this trend in recent years, in particular in its 2018 decision in National Institute of Family & Life Advocates (NIFLA) v. Becerra.107 NIFLA made clear that the scope for regulation of physicians’ speech should be understood very narrowly lest it interfere with ordinary public discourse. NIFLA involved another California law, which required so-called “crisis pregnancy centers” to provide patients notices about state-provided free or low-cost reproductive health services and required unlicensed clinics to notify patients that they were not licensed.108 In invalidating the law, Justice Thomas’s opinion for the Court dismissed the argument that the state had more leeway to regulate the speech of individuals in licensed professions more generally.109 Such a rule would be dangerous, Justice Thomas insisted, because “[p]rofessionals [including doctors and nurses] might have a host of good-faith disagreements” and so (citing Justice Holmes in Abrams) “the best test of truth is the power of the thought to get itself accepted in the competition of the market.”110 In Justice Thomas’s view, then, medical knowledge is, for First Amendment purposes, like any other knowledge, and the best form of regulation is the marketplace of ideas.111
Robert Post has remarked on the “breathtaking inanity of Justice Thomas’s invocation of the ‘marketplace of ideas’ in the context of professional speech.”112 While there are contexts in which scholarly debate and the robust exchange of professional views is important, “[t]he law constructs such professional relationships to protect the reliance interests of patients and clients. It does not construct such relationships to embody the value of caveat emptor, as does the marketplace of ideas.”113 It certainly seems somewhat fantastical to suggest that the remedy for false speech in a doctor’s exam room is simply “more speech.”
But acknowledging the particular context in which speech that constitutes medical care takes place does not mean, and has never meant, that the medical profession cannot be wrong or that medical care cannot be politicized.114 In this domain, as in any other, the power to declare orthodoxy and to punish dissent can be abused and can impede the advancement of knowledge. Scientific progress necessarily requires the contestation and displacement of prevailing wisdom. There must be some competition of ideas, in other words, and the power to constrain that competition should be bestowed with great caution.
The case law contains many examples of the dangers of allowing the government to interfere with physicians’ speech, given the potential politicization of medical care. The law at issue in NIFLA sought to facilitate access to reproductive health care, but in other states the government has used “informed consent” statutes to obstruct such access. Pennsylvania, for example, created a mandate that before performing an abortion, a doctor must inform the patient that there are state-published materials available that “describ[e] the fetus and provid[e] information about medical assistance for childbirth, information about child support from the father, and a list of agencies which provide adoption and other services as alternatives to abortion.”115 Meanwhile, Idaho sought to prevent medical providers in the state from providing patients with information about abortion services in other states, arguing that such information would not be protected speech but simply professional conduct.116 And while California sought to punish doctors who knowingly gave their patients false information, other states introduced laws attempting to prevent state medical boards from disciplining doctors for spreading false information during the COVID-19 pandemic.117 These few examples show that even this narrow exception to the general First Amendment rule against content-based regulation in the context of doctor-patient relationships is susceptible to politicization and governmental abuse—and that there are good reasons for keeping the exception narrow.
As these examples also suggest, despite the polarization of debates about medical misinformation in recent years, First Amendment protections seeking to keep the government out of regulating medical truth do not necessarily have a particular political valence. For example, in 2021 Democratic Senator Amy Klobuchar introduced a bill seeking to carve out “health misinformation” from platforms’ current statutory immunity for user-generated content, with “health misinformation” defined in part by the Secretary of Health and Human Services.118 Senator Klobuchar repeatedly denounced platforms’ failure to take adequate action against false claims on their services, pointing to their failure to remove accounts belonging to the so-called “Disinformation Dozen” as proof that they were not doing enough.119 One of the Disinformation Dozen was, it turns out, Robert F. Kennedy, Jr.—Trump’s nominee for Secretary of Health and Human Services—whose views often conflict with the medical establishment but who would have been empowered under the terms of Klobuchar’s bill to define “health misinformation.” It is for this exact reason that Klobuchar’s bill would have been unconstitutional—because the bounds of public discourse and the meaning of “misinformation” should not be so susceptible to being fixed by a political actor.
Klobuchar’s bill never became law, however. And indeed, the result of the prevailing doctrinal architecture during the pandemic was that very few people were legally sanctioned for spreading medical misinformation. In part because of the limits that the First Amendment imposes on their power, government officials pressured platforms to do what they could not themselves do—remove medical misinformation from the public sphere. But this caused a different problem, because while platforms may have had the technical means and legal latitude to police speech on their services,120 they lacked the expertise and legitimacy to do so.
III. platforms are not knowledge communities
What the preceding potted summary of the First Amendment’s treatment of false or controversial medical claims suggests is that the legitimacy of suppressing speech depends on either a very direct and specific link to harm, or the existence of particular kinds of power inequalities and vulnerabilities. As this Part explains, it was the absence of these legitimating conditions that made platforms’ expanded content moderation efforts during COVID-19 so vulnerable to politicization. This does not mean that platforms have no role to play as gatekeepers. Some platforms have responded to the shifting political winds by publicly denouncing the expansiveness of their prior efforts, with little further reflection or explanation of what their approach would be going forward.121 But blanket renunciation of efforts to moderate medical misinformation in the face of political blowback suffers from the same problem that the aggressive policing of COVID-related misinformation did: it represents a political, rather than a principled, response to a public-health problem.
Instead, what is needed is an affirmative vision for the role of platforms in policing medical claims. This role may be more limited than the one platforms assumed during the pandemic, but it surely cannot be a complete abdication of responsibility. The path out of the current hyper-politicization requires not just the admission of error, but also a full accounting of what happened, what worked, and what did not. As the only institutions with full access to the data showing what happened on their platforms during the pandemic, the social media companies are the only ones that can make this happen. Unfortunately, there are few incentives for platforms to facilitate this nuanced and realistic conversation. But this Part optimistically outlines what that conversation could look like, regardless.
To begin with, and to state what should be obvious, platforms do not occupy the same sociological position as the medical profession when it comes to the regulation of health-related information. Platforms’ authority to regulate speech comes from their own status as private actors, rather than any special claim to expertise. Justice Kagan’s opinion for the majority in Moody v. NetChoice expressly affirmed, as a passing example, a platform’s right to “disfavor posts because they . . . discourage the use of vaccines.”122 But the Court was also clear that platforms would be protected if they made the opposite choice—lawmakers could not interfere with platforms’ choices regardless of whether “the speech environment [lawmakers seek to create] is [better or] worse than the ones to which the major platforms aspire on their main feeds.”123 Thus, Moody is a decision in the same tradition as Alvarez. It affirms that platforms’ content-moderation decisions are protected (like lies), not necessarily because they are valuable, good, or right, but because allowing the government to decide what or how speech should be disseminated is “a worse proposal” than leaving it to the private marketplace of ideas.124
This view of platforms as intermediaries is very different from how courts conceive of the role of the medical profession when it comes to the regulation of doctors’ speech. As Part II explained, the First Amendment allows (and trusts) state-sanctioned professional organizations to impose certain limited content-based restrictions on doctors’ speech because of the specific characteristics of the speech and the professional community: namely, the fact that it is “individualized . . . , tied to a body of disciplinary knowledge from which it gains authority,” and “occurs within a social relationship that is defined by knowledge asymmetry . . . and trust in the accuracy of that advice.”125 These features do not describe platforms’ relationship with the speech on their services, with their users, or with broader society. Platforms have no special duty of care to their users, notwithstanding arguments that they should.126 Nor do platforms have the authority of a knowledge community behind them when they write their rules.
Platforms dealt with this expertise deficit during the pandemic by insisting that they do not moderate speech on the basis of what they think is true or false but instead rely on the guidance of public-health authorities. Platforms still did not want to be “arbiters of truth,” in other words. But the result was that platforms had no vocabulary or method for justifying decisions that departed from official guidance (for example, when medical consensus seems to have outpaced CDC guidance, as with masking guidelines), or for mediating between conflicting advice provided by different authorities. Professional institutions have shared methodologies and hierarchies of knowledge to guide their decision-making in such contexts, but platforms do not.127 For all of these reasons, their interventions in discourse about COVID-19 lacked the legitimacy of the medical authorities that they sought to invoke.
At the same time, it remains true that online health misinformation is a serious public-health concern that costs lives.128 Appreciating the particular role that social media platforms play in society does not mean they should throw up their hands—but it counsels caution and precision in what, exactly, platforms are asked to do. Assuming more content removal is always better, as some politicians appear to,129 ignores the risks and downsides involved in overly aggressive content moderation. Creating an affirmative vision for the role platforms should play requires reckoning with these risks, too.
The most obvious risk is that platforms, relying on authorities, will get it wrong and remove what is, with the benefit of hindsight, valuable information.130 How heavily one weighs this risk will vary based on one’s confidence in the judgment of particular medical authorities. But the risk of false positives is not the only risk created when platforms overstep. There are other downsides that would continue to exist, even if the false-positive problem could be reduced to zero.
A second, underappreciated risk arises from the function of platforms as dynamic expressive spaces and their role in trust formation. Social media platforms are a very particular kind of online space. They are not, for example, the same as online encyclopedias or medical reference sites. Instead, they are social spaces—people look to platforms to facilitate their relationships, rather than to provide them with particular information. This does not mean they are not important spaces of political and public discourse and education. Indeed, social media platforms are a significant source of news for many.131 As the Supreme Court has recognized, they are among the “most important places” for the public to “celebrate some views, to protest others, or simply to learn and inquire.”132 But they are also fundamentally messy spaces that contain a wide spectrum of human experience and cannot be made overly orderly. They facilitate democratic culture by allowing “ordinary people to participate freely in the spread of ideas and in the creation of meanings that, in turn, help constitute them as persons.”133 This culture depends on people having access to these expressive spaces to engage in such public discourse, even when they are wrong. This process can be especially important in the context of a crisis. As crisis-informatics researcher Kate Starbird has explained, during times of stress, people “search for, disseminate, and synthesize the content [they] see into narratives” in a process called “collective sensemaking” that is “critical for [] decision-making, and in many cases allows us to relieve some of the anxiety and uncertainty we face in order to take action.”134
This messiness is important not only for people’s autonomy interest in self-expression, but also for the production of durable and trustworthy information. The production and dissemination of knowledge depends on openness and participation. And crucially, the robustness of truth depends on its ability to withstand dissent. The reliability of expert knowledge in fact depends on the ability of people to challenge prevailing wisdom and on expert consensus nevertheless remaining unchanged.135 This serves the important function of not only stress-testing expert consensus, but also fostering trust in that consensus because of its ability to be transparently contested. As sociologist Zeynep Tufekci has explained, “Misinformation is not something that can be overcome solely by spelling out facts just the right way. Defeating it requires earning and keeping the public’s trust.”136 And this trust can be undermined, rather than fostered, by overly blunt content removals. As social media researcher and former pro-vaccine activist Renee DiResta has argued, “Social-media takedowns are not the right approach to addressing [misinformation like coronavirus-related scaremongering] because they turn the propaganda into forbidden knowledge, often increasing the demand.”137 Content-moderating away the supply of false claims does not make the demand for them disappear—indeed, it can have the opposite effect.
Thus, the idea that there is a linear relationship between more moderation of false claims and greater access to and trust in truthful information relies on an overly static view of the information environment and a misconception of the social function of these expressive spaces. The opposite may often be true—people may develop distrust and suspicion in an environment they perceive as slanting the playing field. This is a problem not only because it undermines people’s belief in trusted medical knowledge, but because it may undermine the legitimacy of core democratic processes. As Robert Post has put it:
In a democratic society, the revenge of the repressed can be a terrible thing. In dealing with the undoubted problem of misinformation, we must negotiate between the Scylla of widely circulating falsehoods and the Charybdis of the loss of democratic participation. Under conditions of polarization, suppression that is experienced as illegitimate can easily lead to an existential opposition between friends and enemies that would undermine the very possibility of democratic politics.138
These risks do not, however, mean that platforms can or should do nothing.
First, there is clearly still a role for content moderation, even if more limited than the approach platforms took during the pandemic. In limited circumstances where a particular post might have a direct relationship to serious physical harm, platforms might justifiably intervene. Posts advising people to use a harmful or ineffective cure for a deadly disease, for example, might be the kind of post a platform should remove, but—as Meta’s own Oversight Board suggested—perhaps not in a context where the particular cure is not available and the post is made as part of political debate about whether it should be.139 Meanwhile, claims that are less directly related to physical harm (for example, that COVID-19 is spread faster due to 5G networks or potentially false claims about the origin of the virus) should be left up. And for the same reason that First Amendment doctrine holds that false commercial speech is unprotected, platforms should have more stringent rules as to what they allow to be advertised on their sites. Free-speech concerns do not entirely disappear in the context of ads, but the role of platforms is very different when they are facilitating advertising as compared with when they are facilitating social interaction. They have a more legitimate claim to imposing their own standards on what kinds of economic transactions they facilitate.
Second, platforms can take other steps to increase the trust in and legitimacy of their content-moderation efforts. Platforms should implement structural separations between those charged with enforcing content-moderation rules and those interacting with government officials.140 This will not eliminate concerns about illegitimate government influence, but it will mitigate them and help bolster trust that platforms are moderating fairly and consistently, rather than at the behest of political actors.
Finally, platforms should enable robust research into the online information ecosystem and what kinds of content-moderation interventions might be effective. Platforms sit on a wealth of data about people’s information diets, how content flows online, and what kinds of platform interventions are effective, but a lack of access to the data for researchers or the public means that these insights—or the raw data that could lead to them—are not available to people outside the platforms themselves.141 During the pandemic, platforms conducted what was in essence a worldwide social experiment about the effects of content-moderation interventions—from removing more content, to labeling false claims, to elevating authoritative information142—but years later there has been little transparency about the nature and extent of these interventions (beyond some high-level statistics),143 let alone their impacts. Platforms’ toolbox for responding to medical misinformation includes far more than simply deciding to take down speech or leave it up, as they showed during the pandemic, but some of these measures will be more effective than others. When Meta’s own Oversight Board recommended that Meta conduct a review of the measures it took during the pandemic and publicly release the findings, Meta demurred and said that it thought its “resources are best deployed to prepare for future questions and crises, rather than attempt to conduct a broad and complex assessment based on a limited data set.”144 In a changed political environment, there is little appetite to talk about and learn from the measures that platforms took during the pandemic. But limited outside access to data has hampered any rigorous and independent analysis of the extraordinary measures taken during this time of crisis.145
Such research can result in counterintuitive findings. Early literature on social media platforms focused on fears of echo chambers and filter bubbles in which algorithms fed users an information diet that simply reinforced their existing views,146 but there is now a growing consensus that such fears have been overblown.147 Similarly, “[w]here once it was a common concern that retractions may backfire and people may believe even more in the misinformation after the correction is presented, recent research has found this phenomenon to be rare.”148 As these few examples show, the science on the best way to facilitate trust and belief in factual information is evolving—and will continue to do so as the online environment itself changes—but people’s intuitions about the problems and their solutions are not always correct.
In better moments, public discourse about content moderation of misinformation acknowledges all these nuances. When Vivek H. Murthy, the Surgeon General of the United States (and one not shy about his concerns about the health impacts of social media149), issued an advisory on “Confronting Health Misinformation,” perhaps cognizant of First Amendment concerns with government demands for more censorship, he stopped short of calling for platforms to remove more content.150 Instead, he called for platforms to take a number of other measures, including giving researchers access to data to analyze the spread of misinformation, addressing information deficits, amplifying communications from trusted messengers, and protecting health professionals from online harassment.151
These kinds of careful claims do not make good soundbites, however, and political rhetoric often paints a much simpler (and, ironically, misleading) picture—a picture that is based on widely shared confusion about the role that platforms can and should play as intermediaries and gatekeepers. The politicization and backlash against platforms’ content moderation during the pandemic is a predictable consequence of asking platforms to play a role they are not cut out for. It suggests that pushing for even more heavy-handed responses to medical misinformation may result in short-term wins (i.e., fewer false claims on particular services), but may undermine the longer-term goal of facilitating the processes of trust building and truth formation.
Conclusion
Debates about content moderation have become so polarized and politicized that they admit no nuance. There are important lessons to be learned from the experience of moderating medical misinformation during the COVID-19 pandemic, but the political noise has drowned out measured conversation about what happened and what should happen going forward. Contrary to one side of the debate, platforms do have a role to play in the regulation of false information about COVID-19 and other health issues. But that role is, and has to be, a far more limited one than many on the other side of the political debate envision. It can be true both that online medical misinformation is a serious public-health problem and that excessive content moderation will exacerbate, rather than cure, the underlying disease of distrust.
Assistant Professor, Stanford Law School. Many thanks to Ros Dixon, Claudia Haupt, Genevieve Lakier, Michelle Mello, and Yoel Roth for generous comments on earlier drafts, and to the wonderful editors of the Yale Law Journal for their care and hard work.