Platform Realism, Informational Inequality, and Section 230 Reform
abstract. Online companies bear few duties under law to tend to the discrimination that they facilitate or the disinformation that they deliver. Consumers and members of historically marginalized groups are accordingly the likeliest to be harmed. These companies should bear at least the same responsibility as actors in other sectors of the economy, if not more, to guard against such inequalities.
Introduction
As much as social-media companies have reconnected college roommates and spread awareness about movements like #BLM and #MeToo,1 they also have contributed to the dysfunction of the current online information environment. They have helped cultivate bigotry,2 discrimination,3 and disinformation about highly consequential social facts.4 Worse still, they have distributed and delivered such material without bearing the burden of anticipating or attending to their social harms and costs.
These and other online companies remain free of any such legal obligation because of courts’ broad interpretation of Section 230 of the Communications Decency Act (CDA), which immunizes “interactive computer services” that host or remove content posted by third parties.5 Congress enacted Section 230 in 1996 to limit minors’ exposure to pornography, as well as to encourage free expression, self-regulation, and innovation online.6 Courts have read the provision broadly, generally dismissing complaints (before discovery) in which plaintiffs allege that the defendant service has published unlawful material—or unlawfully removed material from its platform. Courts will only allow a case to proceed when the defendant “contributes materially” to the offending content.7
In this way, Section 230 has created a very strong incentive for creative entrepreneurs to build or promote interactive computer services that host user-generated content. Novel social-media companies like Myspace, Foursquare, and Friendster sprang up in the years following its enactment.8 These services enabled their users to communicate with each other freely, about almost anything. While community sites and online bulletin boards existed before Congress enacted the CDA in 1996, Section 230 gave a distinctive boost to the online applications and services for user-generated content that most people today associate with the internet.9
But that was a long time ago. Today, the most popular social-media companies do much more than serve as simple platforms for users’ free expression and innovation. The most prominent online services are shrewd enterprises whose main commercial objective is to collect and leverage user engagement for advertisers. Yet, until very recently, courts have allowed these companies to avoid public scrutiny because of the liability shield under Section 230.
It is past time for reform. I have elsewhere argued that courts should more closely scrutinize online intermediaries’ designs on user-generated content and data.10 And courts have begun to do so. They now are far more attentive to the ways in which internet platforms’ prevailing ad-based business models and specific application design features necessarily facilitate harmful content and conduct, online and offline.11 Courts have also been far more alert to whether defendant intermediaries are acting as publishers or something else, as in the cases in which plaintiffs have successfully alleged that online retailers and homesharing services are not “publishers” within the meaning of Section 230.12
But a court-centered approach to reforming Section 230 will not suffice. Courts cannot legislate, and in light of Section 230’s plain language, statutory reform is probably the most effective and direct way to update the doctrine.13 After all, Congress crafted Section 230 to protect interactive computer services that host third-party content. It is therefore up to Congress to adapt its aims to the current state of affairs. Consumers, but especially members of vulnerable and historically marginalized groups, have the most to gain from revamping Section 230 to require all online intermediaries to mitigate the anticipated impacts of their services. Reform is urgently needed because online service designs produce outcomes that conflict with hard-fought but settled consumer-protection and civil-rights laws.
This Essay sets out the reasons why now is the moment for statutory reform. Although this change would not ameliorate all of the social and economic ills for which intermediaries are responsible, it would have the salutary effect of ensuring that companies abide more closely by public-law norms and civic obligations. That is what we expect from actors in other sectors of the economy. Companies that have an outsized influence on public life should at least be held to the same standards—if not stricter ones.
This Essay proceeds in four parts. Part I describes the current social-media market and, in the process, argues that the role of social-media services in hosting and distributing third-party content is incident to their primary objective of holding the attention of their users and collecting their data for advertisers. While this account has been made before, it is important for the arguments in the following Parts because it illuminates how online companies operate in a manner at odds with how they and their defenders often describe themselves. Part II outlines the incentives and positive theory for the prevailing laissez-faire approach to content moderation and the legal doctrine that has given rise to the current state of affairs. In consideration of the market imperative to hold consumer attention, the protections under Section 230 doctrine have in fact set out a perverse disincentive to moderate. Part III outlines how courts have started to see online intermediaries, especially the biggest actors, for what they are, impacting the ways in which they make sense of whether a defendant is a “publisher” under Section 230. Finally, in Part IV, I return to a theme about which I have written elsewhere: the ways in which the robust protection under Section 230 has entrenched and, in some cases, deepened inequality in information markets. Free markets might redound to the benefit of consumers, but, as in other legislative fields, patterns of exclusion and subordination proliferate in the absence of legal rules against disparate impacts.
I. platform realism
Not too long ago, many of the most popular social-media companies loudly proclaimed to be champions of authentic voice, free expression, and human connection.14 Their pronouncements often resembled marketing slogans and branding strategies. But they also reflected an earnest and widely held belief—that internet companies help people discover ideas and acquaintances in ways that legacy media companies in print, radio, television, and cable had not and could not.15
Today, most people are at best resigned to, and at worst weary of, their experiences with social media.16 The same internet companies that proclaimed themselves to be champions of free expression just a few years ago have since backtracked. To be sure, they continue to promote themselves as platforms that “give people the power to build community and bring the world closer.”17 But their recent moderation decisions and policies suggest a far more cautious approach. They had no choice. The pressure they received from politicians, advertisers, consumers, and public interest groups has forced them to more aggressively curtail the most corrosive and objectionable material that they host. These efforts have been especially urgent for popular social-media companies, including Facebook and Twitter, that purport to build communities and foster discussion.18
But social-media companies generally have little concern for the nature of the communities or discussions that they host. That is because they are not mere platforms for authentic voice, free expression, and human connection. In fact, social media’s ability to host and distribute third-party content—and thus to connect people and build communities—is incident to its ravenous ambition to hold the attention of its consumers and collect their data for advertisers.19
The January 6, 2021 siege on the Capitol building demonstrated that social-media companies can even mobilize seething reactionary mobs. There is little doubt that Twitter and Facebook helped to widely spread former President Trump’s spurious claims about the 2020 presidential election, as well as a variety of other assertions and posts that seemed to violate their content guidelines, in the months and years before.20 Their belated decisions to suspend his accounts made this fact plain as day. But other prominent social-media companies were also complicit, including Parler,21 Reddit,22 and YouTube.23 They facilitated and galvanized the groups that bore down on Washington, D.C. to invade the Capitol. They fostered racist and xenophobic white nationalist online communities, provided forums for coordination among those groups, and helped to distribute information about their plans.24 What followed was only a matter of time. The former President’s rhetoric and mendacity lit the match, but online services provided the kindling.
Of course, these companies are not in cahoots with the reactionaries who promoted the attack on the Capitol. Things are more complicated. No matter the platform, demagogues and clever companies exploit a variety of biases for political gain and convince unwitting consumers to do things they might not otherwise do.25 Still, the big social-media companies’ main objective is to hold consumers’ attention for advertisers irrespective of these miscreants’ aims. The companies’ stated goal of fostering community is an incident of their pecuniary imperative to optimize consumer engagement. This objective is not unqualified; social-media companies have long-term incentives to deemphasize content that offends the majority of their consumers, so that consumers will continue using their platforms.26 But those long-term incentives have not been strong enough to curtail the distribution of hateful, violent, and debasing user content and advertisements. Thus, alongside clips of lawyers inadvertently talking through cat filters and clever dance sequences, newsfeeds and recommendations are also filled with baseless headlines about crackpot conspiracies, bigoted calls for violence, and advertisements for far-right militia merchandise—despite policies that ban “militia content.”27
All of this has disillusioned many, if not most, consumers and policy officials. Advertisers have noticed. Companies generally do not want to associate their brands with toxic and divisive content.28 This is why the most popular internet companies today seem to have shifted away from being beacons of free speech.29 Some internet companies have even called for increased government oversight or regulation.30
For several years, legal scholars and social scientists have been recommending creative design tweaks that are more varied than the familiar but unsatisfyingly binary “keep-up versus take-down” framework.31 Internet companies have been listening; today, they are taking demonstrable steps to tamp down their most toxic and alarming content through creative adjustments to the designs of their user interfaces.32 The biggest social-media companies are introducing friction into the ways in which their consumers share and engage with content.33 Twitter and Facebook, for example, started flagging dubious political ads and claims by high-ranking elected officials a couple of years ago.34 They do this by placing visual and textual “content labels” alongside suspect user-generated posts in order to inform consumers about misleading or harmful content.35 Other notable design tweaks include “circuit breakers” that limit the amplification or viral spread of toxic content.36 Twitter, for example, recently started sending users warnings before they post anything that its automated content-review systems identify as potentially harmful or offensive.37 Research has shown that “frictive prompts” like these may curb people’s impulse to post.38 Twitter also recently made changes to the ways in which it crops the photographs that its users post.39 Before Twitter made those changes, research suggested that its “saliency algorithm” featured images of white people more than those of Black people and focused on women’s chests or legs rather than their other physical attributes.40
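A stylized sketch illustrates the basic mechanism of such a pre-publication prompt. The scoring heuristic, flagged-term list, and threshold below are assumptions for illustration only; they do not reflect any platform’s actual content-review system.

```python
# Minimal sketch of a "frictive prompt": before a draft post is published, a
# (hypothetical) score is computed and, above an assumed threshold, the user
# is shown a warning and asked to confirm. The keyword heuristic below is a
# toy stand-in, not any platform's actual classifier.

WARNING_THRESHOLD = 0.5  # assumed cutoff, not a documented platform value
FLAGGED_TERMS = {"idiot", "trash", "disgusting"}  # toy lexicon for illustration


def toxicity_score(text: str) -> float:
    """Crude stand-in classifier: share of words that appear on the flag list."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    if not words:
        return 0.0
    return sum(w in FLAGGED_TERMS for w in words) / len(words)


def submit_post(text: str, confirm_anyway: bool = False) -> str:
    """Publish immediately if the draft scores low; otherwise introduce friction."""
    if toxicity_score(text) < WARNING_THRESHOLD or confirm_anyway:
        return f"PUBLISHED: {text}"
    return "WARNING: this post may be hurtful. Review it, or resubmit to confirm."


if __name__ == "__main__":
    print(submit_post("What a lovely dog!"))      # publishes immediately
    print(submit_post("You are trash, idiot"))    # triggers the frictive warning
```

The point of the friction is not to block the post outright but to interpose a pause that, as the research cited above suggests, may curb the impulse to share.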
These tweaks are a sign that social-media companies are alert to consumers’ distaste for certain kinds of content and content-distribution methods. But these changes elide far more pressing problems. As I explain below in Section II.B and Part III, under current law, companies may still distribute harmful or illegal content even if they control the ways in which they deliver that material.41 Under Section 230, Congress created a safe harbor to encourage online companies to host and moderate third-party content, “unfettered” by government regulation.42 This, according to the courts, was the drafters’ unrestrictive approach to encouraging innovation and content regulation, both at once.43
In this way, through Section 230, Congress promulgated a court-administered innovation policy that aimed to promote a certain kind of online business design—platforms for user-generated content. But legislators in 1996 could not anticipate how shielding this form of “interactive computer service” would beget companies whose main objective would be to optimize consumer engagement for advertisers, unencumbered by the social costs and harms that online content and expressive conduct impose on others.44 The design tweaks I enumerate above may arise from the market imperative to maintain consumer demand, but they also bump up against a far more compelling market incentive to hold and quantify consumer attention for advertisers. Social-media companies, wedded to the extraordinary amounts of ad revenue that they generate, are today in no position to redress this incentive structure. The current information environment is proof enough.
II. market for moderation
Only Congress, the architect of Section 230’s regulatory scheme, is capable of reforming the prevailing incentive structure motivating social-media companies. In Section II.A below, I sketch out the argument for the current laissez-faire approach, before turning to reasons for legislative or regulatory reform in Section II.B.
A. The Laissez-Faire Logic for Online Platforms
Market pressures evidently affect the ways in which social-media companies choose to distribute content. This presents a challenge for people who believe that only legislation, regulation, civil litigation, or criminal enforcement (or the threat of their occurrence) can affect internet companies’ behavior.45 After all, due to the expansive protection courts afford platforms under Section 230,46 few articulated legal rules prefigure how intermediaries may distribute or moderate content. To the extent any exist, they influence (but do not resolve) how intermediaries may distribute online content that violates criminal law,47 intellectual-property law,48 telephone consumer-privacy law,49 and sex-trafficking laws.50 Yet, despite the absence of such constraints, companies appear to be taking the initiative by, among other things, deplatforming demagogues and fashioning new ways to tamp down the distribution of disinformation.51
This is why advocates of the current regulatory regime, as spare as it is, stand on good ground when they defend the status quo. They tend to subscribe to a beguiling classical conception of free markets.52 For them, the unregulated market for “interactive computer services” has promoted experimentation, innovation, learning, and discovery in ways that could never be possible were the law more heavy-handed about restricting content. Freedom may have its costs, they allow, but those are the incidents of progress and learning. Even the most quotidian and frivolous of online exchanges could be valuable.53 Law should not chill free authentic democratic deliberation, such as it is.
Other proponents of the status quo laissez-faire approach might recognize that regulation would be necessary if the market for online services were not filled with variety. But because the online market lacks the trappings of scarcity that characterize other industries, they might argue, regulation is unnecessary. Users enjoy an abundance of options for content and services. The means of production are different, too. Indeed, the barriers to entry for content creators are low: almost anyone can publish anything on some platform. Or they can create their own Substack, Medium account, or webpage. And the market for online content moderation responds to consumer demand. Users who want a heavily curated and moderated online experience can patronize a variety of familiar content producers. Prominent producers of online content like Amazon, Netflix, Peacock, and Disney Plus offer heavily curated entertainment that features their content and excludes much content of other companies. Consumers can also find or subscribe to matching and recommendation services in specialized areas: everything from music sharing sites like SoundCloud or BandCamp, to health technology sharing sites for doctors like Doximity, to sites that facilitate the buying and selling of unregistered firearms like Armslist. Consumers may also find services that restrict harmful content, such as Instagram, which forbids sexually explicit material. Other consumers will prefer ostensibly far more permissive services like Parler. Still others will look for services that host the most alarming and offensive content, including websites like Gab that distribute material that is racist, misogynist, white-nationalist, and antisemitic.54
This is the unfettered market for online content and services. Proponents of the laissez-faire approach contend that, as vibrant as the information ecosystem is, neither legislatures nor regulators should intervene; the free market for content moderation and recommendation is robust in ways that, for them, are normatively desirable and consistent with prevailing First Amendment doctrine.55
The positive case for the laissez-faire approach resonates with an emerging view that companies, especially internet companies, have a constitutional right to decide which ideas to distribute or promote and which ideas to demote or block.56 The strongest version of this view conceives of almost all information flows, even overtly commercial ones, as presumptively protected communicative acts under the First Amendment—notwithstanding the fact that in other contexts, overtly commercial speech is afforded less protection than other expressive acts.57 Scholars have labeled this emergent view the “New Lochner” because of the ways in which courts have applied the strong constitutional interest in free speech to shield commercial activities that historically have not been protected.58 These writers invoke Lochner v. New York,59 a Supreme Court case notorious for its grotesquely expansive view of the freedom to contract, which overrode legislatures’ interest in minimum-wage and maximum-hours labor legislation.60 Some of the more recognizable contemporary artifacts of the New Lochner are the Supreme Court’s decisions on campaign-finance regulation (such as Citizens United v. FEC,61 in which the Court invalidated restrictions on corporations’ independent political expenditures) and targeted marketing (such as Sorrell v. IMS Health Inc.,62 in which the Court struck down limits on the use of prescriber data in pharmaceutical marketing).63
Against this backdrop, proponents of the current regime warn about the unintended consequences of regulation. Social-media companies have been demonstrably creative, innovative, and prolific; they have transformed the internet into a vast bazaar of goods for everyone. Proponents fear that legal oversight will have a chilling effect on innovation and expression.64 Developers anxious about attracting legal trouble will be less creative and adventurous about pursuing untested business models and novel content, effectively entrenching the power of the biggest companies.65 They also posit that regulation might backfire against vulnerable groups and minorities that espouse unpopular views.66 This could have the effect of silencing social movements for reform, including, for example, #BlackLivesMatter and #MeToo. Some even worry that entrepreneurs in this country would lose their competitive edge if the United States imposed legal constraints on what internet companies can develop or sell.67
B. The Disincentive to Moderate
But none of this means that social-media companies are unaffected by law. Even though there is no positive law regulating content moderation, internet companies have been free to develop moderation standards because of the protection under Section 230. For over two decades, the courts have concluded that, pursuant to Section 230(c)(1),68 online intermediaries are not liable for the unlawful material that their users create or develop.69 Nor, under Section 230(c)(2), are companies legally responsible for their decisions to remove or block third-party content or make filtering technology available.70 Congress concluded that if such companies were held liable for any of these activities, the free flow of ideas and information would slow and stall.71 Courts accordingly have shielded intermediaries from liability to the extent those companies provide platforms for third-party content, no matter how heinous the material is.72
This protection goes beyond the protections that the First Amendment provides. That is, Section 230 shields defendants from liability for third-party content that falls outside the scope of First Amendment protections, including defamation or commercial speech.73 Evidently, Congress’s aim was to ensure that all views and ideas, even abhorrent ones, are exposed and debated in the marketplace of ideas.
Courts have read Section 230 to bar third-party liability suits against intermediaries, even when the companies know that their services will distribute unlawful or illicit content.74 This is a departure from traditional publisher liability rules in common-law tort, as well as the general principles of third-party liability across legislative fields.75 Under the first of the protections—Section 230(c)(1)—plaintiffs must successfully establish that an intermediary has “contribute[d] materially” to the development of the content in order for such suits to proceed to discovery, let alone succeed on the merits.76 Pursuant to the second protection—Section 230(c)(2)(A)77—an online service’s decision to take down or block third-party content is not actionable if that decision is voluntary and in “good faith.”78 The third, far less litigated protection shields intermediaries who make filtering technology available.79
In its foundational interpretation of the first of those provisions almost a quarter century ago, a Fourth Circuit panel concluded in Zeran v. America Online, Inc. that “[t]he specter of tort liability in an area of such prolific speech would have an obvious chilling effect” because of the “staggering” amount of third-party content that flows through their servers, some of which is surely unlawful or harmful.80 The panel accordingly decided to read Section 230’s protections broadly, but without specifying which of the three protections among Section 230(c)(1), 230(c)(2)(A), or 230(c)(2)(B) it was interpreting.81 Otherwise, the Fourth Circuit reasoned, online companies would likely choose “to severely restrict the number and type of messages posted.”82 Under that logic, the doctrine should be broadly protective and generous “to avoid any such restrictive effect.”83 Moreover, the court inferred a congressional belief that intermediaries would have it in their commercial self-interest to regulate content to keep their consumers happy; consumer demand would be regulation enough and, in any case, a far better judge of which content ought to be allowed.84
Federal and state courts across the country have since adopted this reasoning.85 Almost two decades later, in a case involving an online service that notoriously facilitated sex trafficking of minors, the First Circuit elaborated that this “hands-off approach is fully consistent with Congress’s avowed desire to permit the continued development of the internet with minimal regulatory interference.”86
But as understood by the courts, Congress did more than simply set out a legal protection for “interactive computer service[s].”87 It privileged services that host and distribute user-generated content in particular by removing all affirmative duties under law to monitor, moderate, or block illegal content.88 Legislators, of course, did not invent online forums like these. Electronic bulletin boards, newsgroups, and similar online communities proliferated in the years before legislators enacted Section 230. Indeed, Congress intervened in 1996 because a trial-level state court decision in New York assigned secondary liability to an online service that distributed defamatory statements by one of its users.89 Section 230 overturned that decision.90 Through its legislation, Congress signaled a policy preference for a certain kind of user-focused service design.91
Silicon Valley responded almost immediately. Investors and internet entrepreneurs eagerly started developing services that feature “user-generated content” with the knowledge that they would not be held legally responsible for any of it.92 Put differently, the protection under Section 230(c) and the Zeran rule that soon followed established a new disincentive for companies to create, develop, or showcase their own content. It is no surprise, then, that emergent companies at this early stage shied away from content production and instead created services for “user-generated content” without fear of legal exposure,93 even if they knew or could reasonably anticipate that their consumers would use the new services to do harm. Congress and the courts, in short, have created the statutory equivalent of an invisibility cloak for services that feature (but do not contribute to) third-party content.94
Ever since, a ready-made populist ideology has supplied more normative heft to this legal fiction, going beyond the laissez-faire justifications set out above. It posits that, with the internet, consumers no longer have to abide by the unilateral designs and service terms of powerful legacy media companies and retailers. The internet will empower consumers to be the architects of their own online experiences.95 In this framing, social-media companies are the apotheosis of that empowerment. They are the necessary outgrowth of the legal protection that Congress created.
So, in spite of its title, the CDA discourages companies from acting with decency. Under prevailing judicial interpretations of Section 230, companies are free to leave up or take down unlawful or harmful content as they please. This is a policy that, in practice, disincentivizes moderation and incentivizes the distribution of third-party material. It explains, at least in part, Silicon Valley companies’ desire to hold consumer attention and collect consumer data. In this way, the laissez-faire policy approach set out by Congress and elaborated by the courts has become a perversion of the statute’s titular objective. It is to this perversion that I turn next in Part III below.
III. commercial designs and perverse incentives
Today, social-media companies do much more than simply host or distribute user-generated content. They solicit, sort, deliver, and amplify content that holds consumer attention for advertisers.96
Most companies’ targeted-content delivery systems are not as sophisticated as those of large and powerful companies like Facebook and YouTube.97 But they, too, fashion their sites with advertisers in mind.98 The Experience Project, for example, was a website that innocuously aimed to make connections between anonymous users based on the information that users entered into a straightforward query box.99 As with most popular online services today, automated decision-making systems were essential to building community groups on the site. Typing something as simple as “I like dogs” or “I believe in the paranormal” would be enough for the service to make a connection.100 It would also send users an email notification whenever other users on the site responded to related inquiries.101 The company generated income through advertisements, donations, and the sale of tokens that users could spend to communicate with others in their groups.102
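A stylized sketch suggests how simple this kind of keyword-based matching and notification can be. The group names, matching rule, and notification function below are hypothetical illustrations; they are not drawn from the Experience Project’s actual system.

```python
# Minimal sketch of keyword-based group matching and notification of the kind
# described above. Group names, the matching rule, and notify() are invented
# for illustration; they do not reproduce the Experience Project's code.
from collections import defaultdict

GROUPS = {
    "dog lovers": {"dog", "dogs", "puppy"},
    "paranormal believers": {"paranormal", "ghost", "ghosts"},
}

memberships = defaultdict(set)   # group name -> set of usernames


def notify(user: str, message: str) -> None:
    # Stand-in for an email notification.
    print(f"[email to {user}] {message}")


def handle_post(user: str, text: str) -> None:
    """Match a free-text post to groups and alert the groups' existing members."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    for group, keywords in GROUPS.items():
        if words & keywords:                      # any keyword overlap joins the group
            for member in memberships[group]:
                notify(member, f"{user} also posted in '{group}'")
            memberships[group].add(user)


handle_post("alice", "I like dogs")
handle_post("bob", "Dogs are the best")           # alice receives a notification
```

The sketch highlights the point in the text: the matching is indifferent to whether the shared interest is dogs, the paranormal, or something far more dangerous.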
As the Experience Project’s userbase grew, so too did the range of community groups that emerged. The company shuttered its service in 2016103 because, as commentators in this field like to observe, “moderation at scale” is difficult if not impossible.104 The Experience Project claimed it had to go offline because of the “bad apples” that were flocking to the site.105 And by “bad apples,” it was referring to a sexual predator who had used the site to entrap underage victims and a murderer who killed a woman he met through its services.106
Before it closed, the Experience Project’s automated system sent a notification to Wesley Greer, who was using the site to meet people and find heroin.107 Greer found what he was looking for—and more, as it turns out. He died of fentanyl poisoning after unknowingly purchasing heroin laced with fentanyl from another user.108 Greer’s mother sued the Experience Project for wrongful death. She argued that he would not have obtained the drugs that killed him without its services. But she could not prevail in court. Her claim failed because the Experience Project was immune under Section 230 from legal liability. According to the court, the service merely connected users to likeminded people and communities they sought out.109 Greer’s mother could not even proceed to an initial hearing on the question of whether the Experience Project’s service design somehow contributed to the death of her son.
As complicit as social-media sites may seem, courts have barred cases where plaintiffs allege legal fault for design features like anonymity,110 notifications,111 recommendations,112 and location tracking.113 They have held that those functions only help to deliver the user-generated content that the intermediary receives and, as such, do not rise to the level of “material contribution.”114 They have also repeatedly refused to impose duties on an “interactive computer service” to monitor for malicious use of its service or implement safety measures to protect against known informational harms.115
In many regards, this legal regime is upside down. By way of comparison, Greer’s experience with the Experience Project resembles that of the young adults who have jumped off “the Vessel,” a “spiraling staircase” in Manhattan’s Hudson Yards that whimsically climbs sixteen stories into the air behind only waist-high guardrails.116 The developers closed access to visitors after a third young person committed suicide by leaping from the structure.117 There can be little doubt that, like the Vessel, certain design features for distribution, delivery, and amplification of information can sometimes be predictably dangerous, even if ostensibly innocuous.
Over the past couple of years, courts have scrutinized more closely the ways in which interactive computer-service design impacts online behavior.118 Consider the Force v. Facebook, Inc. case, decided in the Second Circuit.119 There, plaintiffs had argued that Facebook materially supported terrorism by making friend recommendations and supporting online groups.120 The panel rejected that argument, holding that Facebook could not be sued for enabling foreign terrorists to meet and collaborate in violation of federal antiterrorism laws.121 The case is notable, however, because of then-Chief Judge Katzmann’s separate concurring and dissenting opinion in which he argued that “the CDA does not protect Facebook’s friend- and content-suggestion algorithms.”122 In Chief Judge Katzmann’s reading, neither the statutory text nor the stated purposes of the statute supported the view that an intermediary gets immunity when it showcases user-generated data or content.123 He would have held that Facebook’s recommendations should not count as “publishing” under Section 230(c)(1) because Facebook is, first, communicating its own views about who among its users should be friends and, second, creating “real-world (if digital) connections” with demonstrably real-world consequences.124
Chief Judge Katzmann’s separate opinion in Force marks an important inflection point in the evolution of the doctrine. Over the past couple of years in particular, courts have started to look far more carefully at the ways in which the designs of interactive computer services cause informational harm.125 Chief Judge Katzmann’s opinion cites HomeAway.com v. City of Santa Monica,126 one of a handful of cases concerning municipal ordinances that impose nonpublishing duties on online homesharing services to report or register short-term rentals with public officials. In cases from Boston to New York to San Francisco, federal courts have rejected the Section 230 defense on the view that Section 230 “does not mandate ‘a but-for test that would provide immunity . . . solely because a cause of action would not otherwise have accrued but for the third-party content.’”127 An interactive computer service may, at once, distribute third-party content without fear of liability, but also be subject to legal duties surrounding how it designs and provides its systems to consumers. Thus, the courts have determined that homesharing sites, for example, are not free to ignore whether their guests and hosts are lawfully registered under local housing or hotelier laws.128
Amazon, too, has been on the losing end in federal- and state-court litigation in which plaintiffs have alleged that the retail behemoth is a seller (subject to products liability for product defects) rather than a mere publisher of information from third-party manufacturers.129 Federal and state courts across the country have focused on the way Amazon controls the marketing, pricing, delivery logistics, and general political economy of online consumer retail purchasing. Even if, in any given case, Amazon may not have a duty to warn or monitor third-party products, the courts have generally concluded that the work that it does behind the scenes is not “publishing.”130 It is, rather, a seller—for the purposes of product-liability law, at least. This is to say that, in the eyes of most courts, Amazon is a market actor deeply embedded in the marketing and pricing of consumer products, even if not their manufacturing. An exception, however, arises in Texas, where the court of last resort held that Amazon is not a “seller” under state law if the originating manufacturer “do[es] not relinquish title to [its] products.”131
Finally, in Lemmon v. Snap, Inc., the Ninth Circuit rejected a Section 230 defense to a claim alleging wrongful death. In that case, the plaintiffs argued that Snapchat, a popular social-media app through which users share disappearing photos and videos with annotations, negligently contributed to their teenage sons’ fatal car crash.132 The Snapchat feature at issue, the “Speed Filter,” allowed users to track their land speed and share that information with friends by superimposing a real-time speedometer over another image.133 One plaintiff’s son was allegedly using this filter while driving, shortly before running off the road at 113 miles per hour and ramming into a tree.134 The other plaintiff’s son was in the passenger seat. According to the plaintiffs, Snap (the owner of Snapchat) knew that users went faster than 100 miles per hour on the mistaken belief that they would be rewarded with in-app “trophies,”135 but it did not do anything that effectively dissuaded them from doing so.136 The young occupants’ parents sued, alleging that Snap’s “Speed Filter” negligently caused their sons’ deaths.137 Snap moved to dismiss, arguing that the parents’ suit sought to impose liability for publishing user content—here, the driving speed.
The Ninth Circuit rejected Snap’s Section 230 defense, reversing the lower court’s decision. The panel concluded that the plaintiffs’ claims targeted the design of the application, not Snap as a publisher.138 According to the Ninth Circuit, plaintiffs’ negligent-design claim faulted Snap solely for Snapchat’s design; the parents contended that the application’s “Speed Filter and reward system worked together to encourage users to drive at dangerous speeds.”139 It does not matter, the panel explained, that the company provides “neutral tools,” as long as plaintiffs’ allegations are not addressed to the content that users generate with those tools.140 This conclusion, echoing Chief Judge Katzmann’s partial concurrence in Force, may very well instigate creative new lawsuits that could better tailor Section 230 doctrine to our times.141
This emerging view among judges is refreshing because it suggests that courts are no longer so immediately taken by the pretense that internet companies are mere publishers or distributors of user-generated content—or that they are mere platforms that do little more than facilitate free expression and human connection. Sometimes they are. But often they are not. The sooner that policy makers dispense with the romantic story about beneficent online platforms for user-generated content and recognize their uncontestable pecuniary aims, the better.142
IV. informational inequality
The problem with prevailing Section 230 doctrine today is not only that it protects online services that amplify and deliver misleading or dangerous information by design. To be sure, this is bad enough because the entities that are most responsible for distributing and delivering illicit or dangerous content are the least likely to be held responsible for it. But the principal problems that this Part highlights are the ways in which powerful online application and service designs harm people for whom hard-fought public-law consumer protections (such as civil-rights laws or rules against unfair or deceptive trade practices) are essential. And yet, under the prevailing Section 230 doctrine, it appears that interactive computer services can never be held liable for any of the harms that they contribute to or cause.
What is worse, most courts resolve Section 230 disputes at the motion-to-dismiss stage, before discovery can unfold.143 They do this even though the text of the statute does not explicitly call for that approach. Courts have chosen to limit their analysis to the question of whether plaintiffs have alleged that the respective defendant interactive computer service is acting as a publisher or distributor of third-party content or is materially contributing to unlawful content.144 Courts rarely, if ever, scrutinize how deeply involved the defendant service is in creating or developing the offending content.145 In practice, then, the doctrine has effectively foreclosed any opportunity for the vast majority of plaintiffs (or the public generally) to scrutinize internet companies’ role in the alleged harm.146 This presents a substantial hurdle to holding intermediaries accountable, even if, in the end, they are not liable pursuant to plaintiffs’ legal theory of the case. This is especially true given the ways in which the most powerful internet companies today zealously resist public scrutiny of the systems that animate user experience.147
My reform proposal is simple: online intermediaries should not be immune from liability to the extent that their service designs produce outcomes that conflict with hard-won but settled legal protections for consumers—including consumer-protection and civil-rights laws and regulations. At a minimum, law in this area should be far more skeptical of online intermediaries to the extent that they know that their systems are causing informational harms and that they have the capacity to stop or prevent those harms. And that burden should be substantial when the offending content or online conduct harms consumers and members of historically marginalized groups.148
A. Knowingly Entrenching Inequality
Informational harms spread unevenly across politically or culturally salient groups. Owing to the way in which structural inequality permeates all aspects of society, historically marginalized groups and consumers are likely to be the worst off in the absence of regulatory checks.149 Safiya Noble identified this problem a few years ago in the context of online search.150 She explained that ostensibly innocuous search terms return results that reflect and, with each query, entrench prevailing racist and misogynist meanings. In other words, before Google rectified this problem, when someone searched for “black girls,” the top results were likely to include sexualized or debasing terms, while the top search results for “white girls” tended to be less degrading.
These same problems continue, even as Noble’s writing (and that of others) has raised awareness about the ways in which putatively neutral technologies perpetuate or entrench extant inequalities.151 These consequences could be high-stakes—even life-or-death. Consider, for instance, that some social-media companies distributed information about COVID-19 safety, treatment, and vaccinations to Black people far less often than to other groups during the height of the pandemic.152 Indeed, those belonging to all other racial categories saw significantly more public-health announcements from the Department of Health and Human Services and other public-health bodies.153 Consider also the ways in which political operatives sought to deflate faith in the administration of elections among Black and Brown people with misleading and false information, effectively disenfranchising those groups.154
These are direct informational harms that social media can stop and prevent. There can be little doubt about this last point. In the lead-up to the 2020 election, for example, Facebook proudly announced that it would ratchet down content from putative news sources that are notorious for distributing disinformation, only to revert to amplifying that material in the month or so after electors registered their votes.155
There are few legal remedies for the distribution of high-stakes falsehoods and informational harms like these. Of course, there are rules that forbid the distribution of false election or nutritional or ingredient information—that is, information about the time and place of an election or information about food and drugs.156 But these laws probably do not prohibit falsity in campaign material or dangerous medical advice from laypeople,157 in which case online companies would owe no legal obligation to take such content down under a reformed Section 230 doctrine.158
But there are certain kinds of informational harms for which the stakes are so high that their distribution is or should be unlawful. Current laws and judicial doctrine, for example, strictly forbid advertising that discriminates against people on the basis of protected classifications like race, gender, or age in high-stakes areas like hiring, consumer finance, and housing.159 These rules apply with equal force to advertisers and to offline third-party intermediaries as to the companies whose services and products are being marketed.160 And yet, due to the protection for interactive computer services under Section 230, restrictive discriminatory advertising has proliferated across these sectors.161 Under the broad and prevailing interpretation of Section 230(c)(1), nothing in the law obligates these companies to take down such advertising or prevent its distribution even when they know it exists.
Online companies’ capacity to control the delivery of these unlawful kinds of content, particularly when the disparities ostensibly leave historically marginalized groups less well off, would go unaddressed were it not for intrepid journalism. More to the point, the prevailing interpretation of Section 230 permits companies like these to rest easy and proceed “unfettered,”162 even when they know that their services facilitate informational disparities. Reddit, like Backpage before it, knowingly hosts child pornography.163 Dating sites like Tinder and OKCupid do not screen for sexual predators.164 The law likely does not require them to do anything in these settings—even when they have knowledge that illegal conduct is afoot.165
This prevailing interpretation contorts Section 230’s purposes in at least two ways. First, it flips the ostensible Good Samaritan purpose of the statute upside down by removing the burden to abide by consumer-protection laws. The doctrine in this way creates a disincentive to care rather than an incentive to self-regulate, as the Zeran panel presumed.166 Second, it leaves consumers without any effective legal mechanism to mitigate or redress online informational harms. Danielle Citron, Ben Wittes, and Mary Anne Franks have proposed that intermediaries enjoy the benefit of the Good Samaritan safe harbor if they take “reasonable steps to prevent or address unlawful uses of” their service.167 This reform aims to operationalize the stated objectives of the Good Samaritan purposes of the statute by creating a functional safe harbor for companies that actually regulate third-party content. Requiring companies that host harmful material to take reasonable steps to take down or block such content would help to redress some of the power imbalances at work, particularly because those companies control content distribution. Such a duty would also shift some of the costs of unlawful or harmful content onto the entities best equipped to prevent those harms.
Legislators could also impose a burden on the companies that know that their services cause harm. There is nothing earth-shattering in this idea. After all, it has been taken as an article of faith among torts professors for over five decades now.168 As to internet companies in particular, this reform idea only echoes insights set out over a decade ago by Rebecca Tushnet in a law-review essay on the subject.169 There, just twelve years after Congress passed Section 230, she presciently argued that legislators and policy makers should be alert to the relative social costs of granting broad speech rights or immunity to intermediaries for third-party material.170 It is not at all obvious, she explained, that those protections would engender a sense of responsibility to moderate because the pecuniary aims of those companies, even then, did not necessarily align with those of users.171 Thus, she explained, courts have adjusted “the procedure, rather than the substance, of speech torts in order to balance the costs of harmful speech with the benefits of speech that is useful but vulnerable to chilling effects.”172
The law of defamation in particular offers helpful insights into how Section 230 reformers might tinker with knowledge requirements in order to serve other important public-regarding norms. Among other considerations, defamation law conditions the size and nature of penalties on intentionality as well as the respective target of the content.173 Specifically, journalistic norms of truth-seeking and verification have been important to the courts’ adjudication of defamation claims that require courts to determine whether a defendant journalist acted maliciously.174 The qualified reporter’s privilege also offers some helpful cues. There, courts explicitly consider whether, during the course of grand jury proceedings or even in a criminal trial, a subpoenaed reporter who bears the indicia of a professional journalist may decline a government request to testify.175 In the context of both defamation and the reporter’s privilege, the question of whether the defendant or witness engages in the process of truth seeking is not dispositive, but important. These doctrines protect companies’ good-faith efforts to attend to the quality or veracity of the content they publish.
It would be fully consistent with these longstanding principles to impose duties on companies that have knowledge of, and the demonstrable capacity to control, the (automated) distribution, amplification, and delivery of harmful content. Two reform proposals in the current session of the House of Representatives do precisely that.176 One, for example, would exempt from Section 230 protection services that amplify online material that violates civil rights in particular.177 Another proposal would carve out civil-rights violations, as well as other informational harms to consumers and historically marginalized groups.178 This reform, moreover, would not have to require that companies affirmatively monitor for illegal or illicit content that they publish or amplify. One of the more notable recent bipartisan reform proposals in the Senate, for example, would require interactive computer services to remove, within four days of receiving notice of a court order, any material that the court has adjudged to be unlawful.179
B. Discriminatory Restrictive Targeting
Internet companies also facilitate discrimination on their platforms when they deliver targeted advertising that is personalized to each consumer. These advertisements present a different kind of harm because, most of the time, their scope is difficult for outsiders to discern. Worse, however, are the ways in which intermediaries openly enable advertisers to target audiences on the basis of protected categories like race, gender, and age (among hundreds, if not thousands, of other characteristics) in commercial campaigns for housing, employment, and consumer finance, where hard-fought civil-rights laws forbid advertisements that explicitly or intentionally solicit or exclude audiences on the basis of those dimensions.180 Google reportedly allowed employers and landlords to discriminate against nonbinary and some transgender people, pledging to crack down on the practice only after being alerted to it by journalists.181 Facebook’s Ad Manager openly enables advertisers to target audiences by including or excluding people on the basis of thousands of demographic categories or proxies for those categories.182
Again, restrictive targeting is especially pernicious because victims never know that they have been excluded. That is, even as many social-media companies offer individual users relatively particularized explanations for why they see any given advertisement, consumers do not learn about the content that they do not see. It takes resource-intensive analyses by intrepid researchers from outside of the company to uncover these practices. Unrelenting “data journalism” by Politico and The Markup in particular has uncovered patterns of discrimination in housing, employment, and consumer credit on Facebook’s Ad Manager.183 For example, they have revealed that the Ad Manager allows credit-card companies, housing brokers, and employers to exclude, respectively, young people, racial minorities, and women from advertisements about their services and programs.184 Facebook eventually promised to stop the practice, agreeing with civil-rights groups and plaintiffs in 2019 to bar the use of protected categories in those areas.185 But in spite of these promises, patterns of age and sex discrimination persist.186
The solutions here are straightforward. Commercial content of any kind, whether online or in the physical world, that has the direct effect of discriminating against consumers on the basis of legislatively protected characteristics (e.g., race, sex, and age) in markets where civil-rights laws forbid the practice (e.g., housing, education, employment, and consumer finance) is (or should be) forbidden.187 This reform would attend to outcomes rather than the input variables or decision-making processes on which companies rely to deliver content. This is because automated decision-making systems discover salient patterns in combinations of the most innocuous consumer variables (like food tastes and thousands of others), even when developers exclude protected categories as inputs; together, those innocuous variables apparently act as a virtual proxy for protected categories.188 Recall that, in spite of the March 2019 settlement with plaintiffs in which Facebook agreed to forbid the use of protected categories in ads for housing, employment, and consumer finance, patterns of discrimination on the basis of age and gender persisted over two years after the settlement.189 Companies in these circumstances should not be able to invoke the Section 230 defense—at least not before discovery uncovers whether and how they serve advertisements to their users in ways that violate civil rights and other consumer protections.
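A stylized sketch illustrates what an outcome-focused check might look like: it compares how often an advertisement was actually delivered to members of different protected groups, regardless of which input variables the targeting system used. The delivery records and the four-fifths-style disparity threshold below are assumptions for illustration, not data from or requirements imposed on any platform.

```python
# Minimal sketch of an outcome-focused audit: given records of who was shown a
# housing ad, compare delivery rates across a protected category and flag large
# disparities. The log, groups, and 0.8 cutoff are hypothetical illustrations.

from collections import Counter

# Hypothetical delivery log: (user_id, protected_group, ad_shown)
delivery_log = [
    ("u1", "group_a", True), ("u2", "group_a", True), ("u3", "group_a", True),
    ("u4", "group_a", False), ("u5", "group_b", True), ("u6", "group_b", False),
    ("u7", "group_b", False), ("u8", "group_b", False),
]

shown = Counter()
total = Counter()
for _, group, was_shown in delivery_log:
    total[group] += 1
    shown[group] += was_shown          # True counts as 1

rates = {group: shown[group] / total[group] for group in total}
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best if best else 1.0
    flag = "DISPARATE" if ratio < 0.8 else "ok"   # assumed four-fifths-style cutoff
    print(f"{group}: delivery rate {rate:.2f} (ratio {ratio:.2f}) -> {flag}")
```

Because the audit looks only at who ultimately received the advertisement, it catches proxy-driven disparities that persist even after protected categories are removed from the targeting inputs.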
Proponents of the current regime may protest that such reforms would unduly burden companies and chill the distribution of too much lawful third-party content. But that is a burden that the drafters and advocates of these hard-fought laws imposed in consideration of the social costs of discrimination and disparate consumer harm. The challenge is finding the right balance between innovation and speech on the one hand and equality on the other.
Today, Section 230 protections have set this balance exceptionally out of whack. The burdens that practically all other companies in practically all other legislative fields must carry should certainly be applicable to social-media companies, too, especially considering their enormous influence on public life. Of course, the reforms I propose would not eradicate racism and other consumer harms from internet platforms. Patterns of subordination pervade all public life. But if civil-rights law and other consumer protections are to be effective in today’s networked information economy, internet companies ought to be held accountable for the ways in which their services entrench inequality. Current Section 230 doctrine makes that terrifically difficult, if not impossible.
Conclusion
Powerful internet companies design their services in ways that facilitate illegal discrimination and other consumer harms. This is reason enough for reform.
This Essay’s lessons are twofold. First, most internet companies have the formidable capacity to redress these practices, but do not do so until they are called to account pursuant to a court order or an explosive news report.190 Second, even when called to account, these companies invoke Section 230 immunity before discovery can reveal whether or how they are implicated in the unlawful content or conduct. It is no surprise that the companies would proceed in this way. Courts have been relatively solicitous of online companies’ claims to the protection. It is only recently that some prominent courts and judges have evinced skepticism, in recognition of the ways that service designs necessarily create the harms that ensue.191
While I have elsewhere argued that courts should do more,192 it is time for Congress to reform the statute to comport with the current state of affairs. It can do this by amending the statute so that companies bear the legal duty to block or prevent the amplification or delivery of online content that they know to be unlawful. This is especially important for content that harms consumers and members of historically marginalized groups—that is, people who have few if any other legal avenues for redress.
Today, Section 230’s broad protections empower social-media companies to think that they are above the fray, inoculated from bearing responsibility for the services they develop. This turn away from public obligation is corrosive. The time for course correction is overdue.
Olivier Sylvain, Professor of Law, Fordham University Law School; Senior Advisor to the Chair, Federal Trade Commission (FTC). The author wrote the bulk of this piece before his appointment to the FTC. Nothing here represents the FTC’s position on any of the matters discussed. The author is grateful for excellent student research assistance from Nicholas Loh, Laura Reed, and Ryan Wilkins, as well as incredibly resourceful support from Fordham Law School librarian Janet Kearney.