Dismantling the “Black Opticon”: Privacy, Race, Equity, and Online Data-Protection Reform
abstract. African Americans online face three distinguishable but related categories of vulnerability to bias and discrimination that I dub the “Black Opticon”: discriminatory oversurveillance, discriminatory exclusion, and discriminatory predation. Escaping the Black Opticon is unlikely without acknowledgement of privacy’s unequal distribution and privacy law’s outmoded and unduly race-neutral façade. African Americans could benefit from race-conscious efforts to shape a more equitable digital public sphere through improved laws and legal institutions. This Essay critically elaborates the Black Opticon triad and considers whether the Virginia Consumer Data Protection Act (2021), the federal Data Protection Act (2021), and new resources for the Federal Trade Commission proposed in 2021 possibly meet the imperatives of a race-conscious African American Online Equity Agenda, specifically designed to help dismantle the Black Opticon. The path forward requires regulating platforms, and indeed all of the digital economy, in the interests of nondiscrimination, antiracism, and antisubordination. Toward escaping the Black Opticon’s pernicious gaze, African Americans and their allies will continue the pursuit of viable strategies for justice and equity in the digital economy.
Introduction
In the opening decades of the twenty-first century, popular online platforms rapidly transformed the world.1 Digital modalities emerged for communication, business, and research, along with shopping, entertainment, politics, and philanthropy.2 Online platforms such as Facebook, Twitter, Google, Airbnb, Uber, Amazon, Apple, and Microsoft created attractive opportunities and efficiencies.3 Today, life without those platforms is nearly unimaginable. But they come at a heavy price. Familiar platforms based in the United States collect, use, analyze, and share massive amounts of data about individuals—motivated by profit and with limited transparency or accountability.4 The social costs of diminished information privacy include racial discrimination, misinformation, and political manipulation.5 This Essay focuses on one set of social costs, one set of institutional failures, and one demographic group: diminished information privacy, inadequate data protection, and African Americans.
African Americans could greatly benefit from well-designed,
race-conscious efforts to shape a more
equitable digital public sphere through improved laws and legal institutions.
With African Americans in mind, there are several reasons for advocating for
improved laws and legal institutions. Existing civil-rights laws and
doctrines are not yet applied on a consistent basis to combat the serious
discrimination and inequality compounded by the digital economy.6
Existing common law, constitutional law, and state and federal regulations
protecting privacy—much of which predates the internet—are of limited value.7 Current federal privacy and
data-protection law—patchy, sectoral, and largely designed to implement
1970s-era visions of fair information practices8—is inadequate for digital-privacy
protection and equitable platform governance. Although the Federal Trade
Commission (FTC) exhibits concern about the special vulnerabilities of African
Americans and other communities of color,9 it has lacked the resources to
address many of the privacy-related problems created by internet platforms.10 And the platforms
themselves have failed to self-regulate in a way that meaningfully responds to
race- and equity-related privacy problems.11 The self-governance efforts and
policies adopted by these companies have not silenced criticism that platform
firms prioritize free speech, interconnectivity, and interoperability at the
expense of equitable privacy protections and antiracist
measures.12
In the United States, discussions of privacy and
data-protection law ground the case for reform in values of individual
autonomy, limited government, fairness, and trust—values that are, in theory,
appealing to all people.13
Yet, until recently, the material conditions and interests of African
Americans, particularly from their own perspectives, have received limited
attention in such discussions. As civil-rights advocates observe, although “[p]rivacy should mean personal autonomy and
agency . . . commercial data practices increasingly impede
the autonomy and agency of individuals who belong to marginalized communities.”14 In pursuit of equitable data privacy, American
lawmakers should focus on the experiences of marginalized populations no less
than privileged populations. For Black Americans, those experiences
feature three compounding vulnerabilities:
(1) multiple forms of excessive
and discriminatory surveillance; (2) targeted exclusion through differential
access to online opportunities; and (3) exploitative online financial fraud and
deception. Digital-privacy and data-protection law proposals fashioned
to promote equitable governance online must be responsive to calls for improved
online governance made by and on behalf of African Americans relating to these
forms of pervasive and persistent disadvantage.
Although a great deal of state and federal privacy and data-protection law is already on the books,15 additional rules, statutes, and authorities are needed to empower the public sector to regulate how companies handle personal information.16 A new generation of privacy and data-protection laws is evolving in the United States.17 But promising state and federal initiatives require a hard look to determine whether they go far enough toward addressing the digital-era vulnerabilities of African Americans. The new generation of laws would ideally include provisions specifically geared toward combatting privacy- and data-protection-related racial inequalities enabled by online platforms. A stronger FTC, a freestanding federal data-protection agency, and updated state and federal privacy and data-protection legislation can all potentially help meet contemporary demands for more equitable online-platform governance. How much they can help depends in large part upon whether these measures are pursued boldly and specifically to address the racial-equity challenge.
In Part I of this Essay, I describe the “Black Opticon,” a term I coin to denote the complex predicament of African Americans’ vulnerabilities to varied forms of discriminatory oversurveillance, exclusion, and fraud—aspects of which are shared by other historically enslaved and subordinated groups in the United States and worldwide.18 Echoing extant critical and antiracist assessments of digital society, I reference the pervasive calls for improved data-privacy governance, using the lens of race to magnify the consequences for African Americans of what scholars label “surveillance capitalism,”19 “the darker narrative of platform capitalism,”20 and “racial capitalism.”21 Privacy advocates repeatedly call for reform to improve online data protections for platform users and the general public who are affected by businesses’ data-processing practices.22 Such reforms also benefit African Americans, of course, to the extent that the interests of African Americans converge with those of the general public. I maintain, however, that generic calls on behalf of all population groups are insufficient to shield the African American community from the Black Opticon. To move from generic to race-conscious reform, I advance a specific set of policy-making imperatives—an African American Online Equity Agenda—to inform legal and institutional initiatives toward ending African Americans’ heightened vulnerability to a discriminatory digital society violative of privacy, social equality, and civil rights.23
In Part II, I consider whether new and pending U.S. data-privacy initiatives meet the reform imperatives of my African American Online Equity Agenda.24 I argue that while the Virginia Consumer Data Protection Act is flawed and too new for its full impact to be evaluated,25 several provisions that could over time reduce race discrimination by private businesses are on the right track. Because the FTC is evidently committed to using its authority to advance the interests of people of color, the U.S. House Energy and Commerce Committee’s recommendation to allocate a billion dollars to create a new privacy and data-protection bureau within the Commission is on point for reducing online fraud and deception targeting African Americans.26 Finally, though unlikely to be passed by Congress in the very near term, privacy legislation introduced by Senator Kirsten Gillibrand in 2021 is remarkably equity conscious, setting the bar high for future federal legislation.27
I conclude that although we must welcome these major reforms and proposals for advancing online equity, privacy, and consumer-data protection, grounds for concern remain when reforms are assessed against imperatives for specifically combatting African American disadvantage. Whether and to what extent contemplated legal and institutional reforms would free African Americans from the Black Opticon remains an open question. However, the current era of privacy and data-protection reform presents a paramount opportunity to shape law and legal institutions that will better serve African Americans’ platform-governance-related interests no less than, and along with, the interests of others.
I. the black opticon: african americans’ disparate online vulnerability
African Americans dwell under the attentive eye of a Black Opticon, a threefold system of societal disadvantage composed of discriminatory oversurveillance (the panopticon),28 exclusion (the ban-opticon),29 and predation (the con-opticon).30 This disadvantage—propelled by algorithms and machine-learning technologies that are potentially unfair and perpetuate group bias—is inimical to data privacy and an ideal of data processing that respects the data subject’s claim to human dignity and equality. Structural racism renders African Americans especially vulnerable to disparities and disadvantages online.31 Highlighting the problem of algorithmic bias, Dominique Harrison asserted that “Black and Brown people are stripped of equitable opportunities in housing, schools, loans, and employment because of biased data.”32 As Harrison’s observations attest, my Black Opticon metaphor—denoting the ways Black people and their data can be visually observed and otherwise paid attention to online—encapsulates literal aspects of the urgent privacy and data-protection problem facing African Americans.
African Americans are active users of online platforms. Although roughly thirty percent of Black homes lack high-speed internet access and seventeen percent lack a home computer,33 Black Americans are well-represented among the 300 to 400 million users of Twitter and the billions of daily users of Meta (previously known as Facebook) platforms.34 African Americans accounted for nearly one-tenth of all Amazon retail spending in the United States in 2020.35 As the digital divide is closing among younger Americans,36 commentators extol a vibrant African American “cyberculture” of everyday life.37 Black people’s recent civil-rights strategies include racial-justice advocacy that is digitally mediated.38 While Black users and designers of online technology were once a faint presence in popular and academic discussions of the digital age, today, “Black digital practice has become hypervisible to . . . the world through . . . Black cultural aesthetics . . . and social media activism.”39 On a more mundane level, Black Americans turn to internet platforms for access to housing, education, business, employment, loans, government services, health services, and recreational opportunities.40
It may appear that African American platform users, like other groups of users, are not overly concerned with privacy and data protection because of their seeming readiness to give away identifiable personal information.41 While some consumers indeed undervalue privacy,42 privacy-abandonment behaviors may not signal genuine indifference to privacy for several reasons.43 Low-income African Americans may decline privacy protections, such as smartphone encryption, due to prohibitive costs of data-secure devices and services.44 Some consumers trust that Big Tech is sufficiently caring, comprehensively regulated, and responsive to closing gaps in privacy protection.45 Typical consumers experience corporate data practices as a black box.46 Terms of service and privacy policies, while available online, are lengthy, technical, and complex.47 Educated and uneducated users alike are poorly informed about the implications of joining a platform, and once on board a platform, stepping off to recover a semblance of control over data can sever important channels of communication and relationships.48
I believe many African Americans do care about their data privacy, and that their understanding of the many ways it is unprotected is growing. The remainder of this Part describes the three constitutive elements of the Black Opticon of African American experience: (1) discriminatory oversurveillance (the panopticon), (2) discriminatory exclusion (the ban-opticon), and (3) discriminatory predation (the con-opticon). In so doing, it illustrates how attentive eyes within American society misperceive African Americans through warped lenses of racial discrimination.
A. Discriminatory Oversurveillance
Elements of the Black Opticon have been recognized for decades. In fact, since the dawn of the computer age, wary privacy scholars have emphasized that watching and monitoring individuals with the aid of technology threatens privacy—both for its tendency to chill and control behavior, and for its potential to efficiently reveal and disseminate intimate, personal, incriminating, and sensitive information. For example, Alan Westin’s 1967 treatise, Privacy and Freedom, among the most influential law-related privacy studies of all time, noted the special susceptibility of African Americans to the panoptic threat.49 According to Westin, privacy denotes a certain claim that all people make for self-determination: “Privacy is the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others.”50 Distinguishing physical, psychological, and data surveillance, Westin explicitly mentioned African Americans in relation to concerns about covert physical surveillance and discrimination by segregationists within a white power structure.51 Arthur R. Miller raised political surveillance as a privacy problem facing African Americans in his landmark 1971 Assault on Privacy.52 And a more obscure early book devoted to privacy in America, Michael F. Mayer’s 1972 Rights of Privacy,53 decried the unlawful wiretapping surveillance of Martin Luther King Jr. that was revealed in the trial of Cassius Clay (Muhammad Ali).54 Mayer devoted a short chapter to the oversurveillance of the poor dependent on government housing and other benefits,55 foreshadowing Khiara M. Bridges’s work on privacy and poverty.56
Twitter, Facebook, Instagram, and nine other social-media platforms
came under attack in 2016 for providing location-analytics software
company Geofeedia with access to location data and other social-media
information57—an illustration of platform-related
oversurveillance. According to an American Civil Liberties Union report that
year, police
departments used software purchased from Geofeedia that relied on social-media
posts and facial-recognition technology to identify protesters.58 For example, following the Black Lives Matter
(BLM) protests sparked by the death of African American Freddie Gray while in
police custody, Baltimore police reportedly used Geofeedia software to track
down and arrest peaceful protesters with outstanding warrants.59 Police deliberately focused arrests
within the majority Black community of Sandtown-Winchester,
the precinct where Freddie Gray was apprehended and
killed.60 Geofeedia continued to
market its services as a way to track BLM protesters at a time when the FBI was
a Geofeedia client and an FBI report indicated that a so-called “Black Identity
Extremist” movement would be a target of surveillance.61 As imprecisely defined by
the FBI, the label “Black Identity Extremist” could be affixed to an activist
who merely protested police brutality.62 Fortunately for African
Americans and all social-media users, Twitter, Facebook, and Instagram
discontinued sharing location data and social-media feeds with Geofeedia
following public backlash.63
But panoptic concerns about platforms and privacy will remain so long as
efforts to subject Black people to special levels of efficient social control
persist.64
Today, government and nongovernmental surveillance practices and technologies of all kinds disparately impact communities of color.65 The Geofeedia example demonstrates how data sharing and disclosures by digital platforms can have far-reaching inequitable consequences for African Americans. The business of providing platform users’ location data and social-media posts to third parties, including law enforcement, without express consent or transparency is aptly perceived by platform critics as violating users’ legitimate information-privacy interests.66 Data-privacy interests are implicated whenever consumer data collected or shared for one purpose is used for other purposes without consent and transparency. A panoptic threat grows as the extent, frequency, and pervasiveness of information gathering through surveillance grows. The Black Opticon exists—and has long existed—inasmuch as wrongful discrimination and bias persistently focus panoptic surveillance on African Americans, leading to oversurveillance.67 Location tracking, the related use of facial-recognition tools, and targeted surveillance of groups and protesters exercising their fundamental rights and freedoms are paramount data-privacy practices disproportionately impacting African Americans.
B. Discriminatory Exclusion
I now turn to another feature of the Black Opticon, namely, targeting Black people for exclusion from beneficial opportunities on the basis of race. Discriminatory exclusion requires obtaining information identifying a person as African American.
Such information is not hard to come by. In the 1950s, a brick-and-mortar business could obtain race information from the local city directory. In Atlanta, Georgia, for example, white-only businesses wishing to avoid soliciting business from African Americans could use race information published in the 1951 City Directory.68 The directory designated people known to be Black with a “c” for colored.69 It also included information from which race might be deduced due to segregation in housing and employment, such as the entrant’s street address and place of employment.70 African Americans who did not wish to be discriminated against might have aspired to keep their race private. But in the old South, neither civility norms nor laws protected race information from disclosure.
Today, similar information can be used to identify a person as African American. For example, residential addresses continue to serve as racial proxies.71 And even something as basic as a person’s name can reveal their race. These racial proxies then facilitate discriminatory exclusion. For example, Dr. Latanya Sweeney, a Harvard professor and former Chief Technologist at the FTC, uncovered that when she typed her name into Google, an advertisement for InstantCheckmate.com captioned “Latanya Sweeney Arrested?” popped up.72 When she searched for the more ethnically ambiguous “Tanya Smith,” the arrest-association advertisement disappeared. As Sweeney’s work demonstrates, biased machine learning can lead search engines to presume that names that “sound Black” belong to those whom others should suspect and pay to investigate.73
Online businesses have the capacity to discriminate and exclude on the basis of race, just as brick-and-mortar businesses have done. Discriminatory exclusion by government and in places of public accommodation is both a civil-rights and a privacy issue. In the 1960s and 1970s, legal commentators began to frame uses of information about race to discriminate and exclude Black people from opportunity as among the nation’s information-privacy problems. For example, Miller pointed out in Assault on Privacy that psychological testing, which excluded some minorities from employment and school admissions, required test takers to respond to invasive personal questions.74 In the 1970s, when unfair credit practices could easily exclude people of color from lending opportunities, policy makers linked fair credit goals to consumer-information privacy.75 Privacy rights were understood to include the right to withhold information, to restrict sharing with third parties, and to access and correct credit-reporting information.76
The forms of racism and inequality of opportunity that policy makers recognized in the 1970s permeate today’s digital sphere, in the exclusionary practices of the sort Didier Bigo would term ban-optic.77 Discriminatory practices (i.e., those that rely on racialized sorting by humans and machines that reinforce racism and deny equal access to services and opportunities78) thrive on online platforms. Platforms have come under attack for targeted advertising that discriminates against consumers of color with respect to housing, credit, and services, for hosting racially biased advertisements, and for facilitating unequal and discriminatory access to ridesharing and vacation rentals.79
Nonconsensual and discriminatory uses of personal information, like other unauthorized use and disclosure of personal information, should be understood as information-privacy violations.80 For a time, advertisers on Facebook were able to select which Facebook users could and could not see their advertisements by race.81 The ability to target sectors of the market meant that African Americans could be excluded from commercial opportunities on the basis of their race alone. In November 2017, after Facebook claimed to have devised a system that would recognize and not post discriminatory housing advertisements, journalists at ProPublica were able to purchase housing advertisements that excluded classes protected by the Fair Housing Act, including African American users and users interested in wheelchair ramps.82 A representative from Facebook explained ProPublica’s racially discriminatory housing advertisements as a technical failure.83 In 2019, showing that some data-privacy problems can and should also be framed as civil-rights violations,84 the U.S. Department of Housing and Urban Development charged Facebook with violating the Fair Housing Act by selling advertisements that discriminate against protected classes.85 Facebook’s current policies prohibit discrimination based on race.86
C. Discriminatory Predation
Personal data of people of color are also gathered and used to induce purchases and contracts through con jobs, scams, lies, and trickery. Discriminatory predation describes the use of communities of color’s data to lure them into making exploitative agreements and purchases. This feature of the Black Opticon searches out and targets vulnerable African Americans online and offline for con-job inclusion. Predatory surveillance is the flip side of the exclusionary-surveillance coin.
Discriminatory predation makes consumer goods such as automobiles and for-profit education available, but at excessively high costs.87 Predation includes selling and marketing products that do not work,88 extending payday loans with exploitative terms,89 selling products such as magazines that are never delivered,90 and presenting illusory money-making schemes to populations desperate for ways to earn a better living.91 The FTC has gone after wrongdoers for the practice of targeting low-income individuals.92 The agency has noted that Native Americans, Latinos, African Americans, immigrants, and inmates and their families are disproportionately impacted by fraud.93 These populations are lured through false, unfair, or fraudulent online and offline advertising, marketing, and promotions for consumer goods, medical products, government services, education, employment, and business opportunities.94
Recent litigation focused on the company MyLife.com.95 MyLife.com is an online enterprise that sells profiles of individuals, marketed for purposes including housing, credit, and employment-screening decisions.96 These services are particularly important to communities of color, where limited income, weak credit, and criminal-justice histories can combine as barriers to obtaining basic necessities.97 Privacy provisions of the Fair Credit Reporting Act (FCRA)98 (along with provisions of the Restore Online Shoppers’ Confidence Act99 and the Telemarketing Sales Rule100) were deployed in a lawsuit brought by the FTC and Department of Justice. The suit alleged that MyLife.com violated the FCRA by “failing to maintain reasonable procedures to verify how its reports would be used, to ensure the information was accurate, and to make sure that the information it sold would be used by third parties only for legally permissible purposes.”101 The suit also importantly alleged that defendant MyLife.com fraudulently enticed consumers into purchasing automatically renewing subscriptions to its services by providing them with false and unverified information about their own backgrounds and those of others, including criminal histories, and that MyLife.com lacked procedures for both determining the accuracy of information and providing user notices.102 Although the district court denied the defendant’s motion to dismiss103 and granted the government partial summary judgment, the court did not grant summary judgment on the FCRA privacy-related claims.104 The suit resulted in an injunction and $21 million in civil penalties.105
More enforcement lawsuits of this type, which make use of existing law and the FTC’s unfair-trade-practice authority, could help deter online predatory practices and shrink the Black Opticon. I believe that future litigation enforcing new race-conscious privacy laws enacted to address discriminatory predation and the disparate impact of data abuses on people of color could help even more.
* * *
To reiterate, Part I has illustrated the three sets of data-protection problems that comprise the Black Opticon. The Geofeedia incident, discussed in Section I.A, demonstrated panoptic problems of oversurveillance. Oversurveillance undermines African Americans’ sense of security and fair play by placing their lives under a level of scrutiny other groups rarely face, exacerbating the problem of unwarranted encounters with the criminal-justice system. Facebook’s discriminatory advertisements, discussed in Section I.B, embodied ban-optic problems of racially targeted exclusion from opportunity. Ban-optic practices online encase African Americans in a racial caste system whereby roles and opportunities are fixed by perception of race rather than need, merit, or ability.106 Finally, the MyLife.com litigation, discussed in Section I.C, illustrated the con-optic problems of targeted fraud and deception. African Americans deserve the attention of marketplace opportunity, but on the same, more favorable terms extended to other groups.
The Black Opticon pays the wrong kinds of attention to African Americans, using the resources of internet platforms and other digital technologies to gather, process, and share data about who we are, where we are, and to what we are vulnerable. In Part II, I consider how and whether changes in the design and enforcement of privacy law could help combat the Black Opticon.
II. an african american online equity agenda
This Part considers whether legal approaches premised on privacy law hold promise for African Americans seeking to escape the Black Opticon. To gauge that promise, I lay out an African American Online Equity Agenda (AAOEA) and use it to evaluate whether a new state law in Virginia, new privacy-protection resources for the FTC, or a proposed new federal privacy agency embodies assumptions and goals calculated to advance the interests of African Americans.
A. Escaping the Black Opticon: Paths Forward
Calls for improved platform governance flow from many sources, including from platform company leaders themselves.107 Advocates have called repeatedly for platform governance that includes privacy and data-protection law reform and industry self-governance to improve online data protections.108 Platforms have sometimes responded to episodes of intense criticism from organized groups with changes in policy and practice.109
Minority-group advocates have had some success directly pressuring industry, raising hopes for industry self-governance. For example, the advocacy group Color of Change aptly credits itself with persuading Facebook to conduct a civil-rights audit of its policies with respect to white-nationalist content; persuading Google to ban predatory lending apps from Google Play to protect Black people from unreasonable terms, high default rates, and manipulation; and persuading Pinterest to stop featuring plantation wedding and party venues implicitly glorifying the heinous slave economy.110 Successful interventions spurring voluntary change responsive to panoptic, ban-optic, and con-optic threats have occurred, but I speculate that they may be more the exception than the rule, especially since smaller platforms’ abuses may fly under the radar of public-interest advocates. Voluntary self-governance to date has left African Americans vulnerable to lost privacy, data abuses, and social and economic inequity.111
Legislative reform is in the mix of proposed governance solutions as commentators vigorously debate the relative merits of law, data trusts,112 content moderation, social-media councils, platform design, and norms.113 Regimes of data-privacy law, antitrust law, intellectual-property law, constitutional law, civil-rights law, and human-rights law all bear on platform governance.114 Due to the major inadequacies of existing measures, I urge new privacy and data-protection legal measures as requirements of adequate platform governance.115 Federal privacy-law reform is urgently needed to protect the interests of all Americans, including African Americans. Recently proposed federal legislation116 and a proposed expansion of the FTC’s privacy and data-protection capacities are generally commendable,117 as is recently enacted state privacy legislation in California, Colorado, and Virginia.118 However, they must be assessed through the lens of race to determine whether they address the oversurveillance, exclusion, and scamming characteristic of the Black Opticon.
B. Generic Versus Explicitly Group-Specific Reform Guidance
To adequately confront the Black Opticon,
data-privacy reforms should explicitly address group-specific harms, not just
general harms. Existing guidance around data-privacy reform falls short of
directly addressing the pervasive problems of African Americans in the digital
economy—even when it purports to promote equity. Consider, for example, the
Civil Rights Privacy and Technology Table (CRPTT), a consortium of leading
civil-rights organizations and privacy advocates examining privacy through the
lenses of marginalized communities. The CRPTT concluded that Congress should
prioritize equity, ensuring “that technology serves all people in the United
States, rather than facilitating discrimination or reinforcing existing
inequities.”119
The CRPTT also announced a set of equity principles beneficial to all that it
collectively believes should guide Congress in the prioritization of equity: “Ending
High-Tech Profiling,” “Ensuring Justice in Automated Decisions,” “Preserving
Constitutional Principles,” “Ensuring that Technology Serves People
Historically Subject to Discrimination,” “Defining Responsible Use of Personal
Information and Enhancing Individual Rights,” and “Making Systems Transparent
and Accountable.”120
Some of the CRPTT’s principles are facially generic for improving privacy and data protection for all people—namely, for promoting responsible information use, preserving constitutional principles, enhancing rights, and promoting transparent and accountable systems.121 One principle invokes communities of color: “ensuring that technology serves people historically subject to discrimination” in access to goods and services.122 Two other principles do not invoke African Americans explicitly, but are critical to dismantling the Black Opticon. These principles are “ending high-tech profiling” and “ensuring justice in automated decisions.”123 As I discussed in Part I, concerns about negative profiling and algorithmic injustice are high on the list of African American concerns about platform inequities. The CRPTT principles support abating wrongfully discriminatory oversurveillance, exclusion, and predation.
At this critical time of exploding technology and racial conflict, I believe that policy making should be explicitly antiracist. In addition to considering agendas concerning the general population, which are appropriate and foster strategic coalition building, policy makers should welcome and rely upon group-specific agendas for guidance and articulate race-based rationales for reform measures intended to protect data and data privacy. This dual approach, which can be termed “policy making for all and policy making for some,” will help to ensure that the interests of marginalized racial minorities are not overlooked, and aid in surfacing possible conflicts between the interests of one racialized group and other groups. For example, targeting Black men for high-tech modes of data surveillance based on race may address concerns of a majority about freedom from crime, but violate Black men’s entitlement to privacy and freedom from racist social control. The agenda I offer for assessing whether recent and pending legal reforms will help African Americans escape the Black Opticon is in precisely the same spirit as those adopted by the CRPTT, but toward the specific goal of disabling the Black Opticon. I specifically reference the African American experience through an African American Online Equity Agenda.
The keystone of the AAOEA is to direct the design of privacy and data-protection policy reforms to the pervasive problems of African Americans in the digital economy. Characterizing a Black Opticon of disparity and disadvantage is my way of succinctly denoting the pervasive problems African Americans are facing online. While the Black Opticon frames my response to recent legal enactments and proposals, the AAOEA centers on five points of guidance for race-conscious, antiracist law and policy making, articulated as goals124:
1. Racial
inequality nonexacerbation goal: Design privacy
and data-protection policies recognizing that baseline data privacy and the
power data privacy confers may be unequally distributed along racial lines in
society, and that racial inequalities should not be exacerbated.
2. Racial
impact neutrality goal: Design privacy and data-protection policies
acknowledging that ostensibly race-neutral privacy policies may not have
race-neutral effects or protect all groups equally.
3. Race-based
discriminatory oversurveillance elimination goal: Design privacy and
data-protection policies that disable automated and nonautomated invasive and
excessive surveillance, monitoring, profiling, tracking, and identification of
African Americans.
4. Race-based
discriminatory exclusion reduction goal: Design privacy and data-protection
policies aimed at prohibiting online advertising and marketing practices that
exclude and wrongly discriminate on the basis of
African American race or characteristics that are its proxies, including
phenotypes, names, places of residence, or associations.
5. Race-based
discriminatory fraud, deceit,
and exploitation reduction goal: Design privacy and data-protection policies
aimed at reducing fraud, deceit, and scams targeting African American consumers
and exploiting their socioeconomic vulnerabilities.
In the next Section, I reference these agenda items to assess features of the recently enacted Virginia Consumer Data Protection Act, the proposed creation of an FTC privacy bureau, and a bill proposing an independent federal privacy agency. Although none of these potential reforms are dedicated to Big Tech platform governance, comprehensive privacy and data-protection reforms generally bear on regulation of the digital economy with implications for the equitable regulation of personal-data processing by all online platforms.
C. Assessing Enacted State Law: Virginia Consumer Data Protection Act (2021)
American privacy law has become a fast-evolving field. State legislation already on the books in 2022 will surely be followed by additional state and federal measures, all of which are likely to reflect the global influence of the European Union’s 2018 General Data Protection Regulation (GDPR).125 At least five states—New York, Pennsylvania, Minnesota, North Carolina, and Ohio—were actively considering privacy and data-protection legislation in early 2022.126 The anticipated explosion of nonidentical state law may prompt a comprehensive federal measure, if only to rescue the national business sector from the inefficiencies of compliance with dozens of potentially inconsistent state regimes. In 2018, California became the first U.S. state to adopt a comprehensive data-protection law, and its reforms are still unfolding after a statewide ballot initiative expanded and amended the law in November 2020.127 Virginia came next with a comprehensive statute in March 2021,128 followed by Colorado.129
Looking at privacy and data-protection law through a lens of
the Black Opticon, the Commonwealth of Virginia
Consumer Data Protection Act (2021) (VCDPA) holds special interest as a case
study in possibility and disappointment. Of the first three states (including
California and Colorado) to enact comprehensive new privacy and data-protection
statutes, Virginia is the only state that belonged to the former
Confederacy.130 It is now saddled with a
highly visible legacy of African American slavery and legally enforced racial
segregation.131 Virginia has a larger
share of African American residents than either of the other early adopter
states. Indeed, approximately twenty-one percent of Virginians are African
American, compared to seven percent of Californians and just five percent of
Coloradoans.132 An ethnically and racially
diverse group of legislators sponsored the VCDPA, including its “chief patron,”
African American Delegate Cliff Hayes.133
The VCDPA, which will go into full effect January 1, 2023, boasts general antidiscrimination provisions,134 but it does not explicitly reference the interests of African Americans or antiracism as a legislative goal. No strong evidence, such as records of legislative debate, preambles, findings, or express provisions, displays conscious recognition of the first two AAOEA agenda items: that baseline privacy and its associated powers may be unequally distributed along racial lines in society, and that race-neutral laws may not have race-neutral effects.
Neither a civil-rights law nor an online-platform-governance measure as such, the VCDPA enacts a race-neutral consumer-information-protection regime applicable to businesses on behalf of all Virginians.135 The statute does not target global platform companies, but would apply to online companies of a certain size doing business in the state or with its resident consumers. Big Tech firms, including Microsoft and Amazon, fully endorsed the statute.136 Future of Privacy Forum, an organization supported by platforms such as Facebook, Google, and Twitter, praised it as a “significant milestone.”137 But critics have described the Virginia law as weak—even “empty.”138
Under the statute, consumers have a right to access, correct,
remove, and know about “personal data” processed by data “controllers.”139 Data “controller” is a
term borrowed from the GDPR, defined in the VCDPA as an entity that determines
the means or purposes of data “processing,” which includes, among other things,
the “collection, use, storage, disclosure, analysis, deletion, or modification
of personal data.”140
Data controllers are responsible for data minimization, meaning that they may
not process more personal data than needed nor process personal data for
purposes other than those for which it was originally authorized and processed
absent explicit consumer
consent.141 The VCDPA defines “personal data”
to include “any information that is linked or reasonably linkable to an
identified or identifiable natural person,” but excludes employment data, as
well as “pseudonymous,” “de-identified,” and “publicly available” information.142 Like Article 28 of the GDPR,
the VCDPA requires that data controllers execute processing agreements with
partnering data processors.143 In a key feature adapted from
the GDPR, the VCDPA requires “data protection assessments” of consumer risks
and benefits.144 While the details are
unclear, such assessments could be required as a precondition even of consensual, algorithmically aided targeted advertising and of the use of AI and profiling where there is a “reasonably foreseeable risk” that they could lead to a discriminatory impact, privacy invasion, or other harm.145
On behalf of all Virginia consumers, the VCDPA governs the many activities of larger private-sector, nongovernmental data controllers, exempting from its reach massive sectors of the economy, including state government and its subdivisions; HIPAA “covered entities”; financial institutions or data subject to the Gramm-Leach-Bliley Act; data subject to the federal Fair Credit Reporting Act; and data related to vehicle driver information, subject to the federal Driver’s Privacy Protection Act of 1994.146 Moreover, the statute’s requirements do not apply to nonprofits, higher-education institutions, or employment activities.147
When these VCDPA coverage exemptions are assessed through the
lens of the AAOEA, it becomes clear that they may lessen the VCDPA’s capacity
to eliminate forms of online oversurveillance, exclusion, and predation likely experienced
by African Americans in Virginia. As a group, Black Virginians have fewer
educational and financial resources than some other racial groups in the state;
about sixteen percent of the state’s African American residents, as compared to
just eight percent of the state’s white residents, live in poverty and are
vulnerable to financial exploitation and
abuses.148 Slightly over fourteen
percent of Virginia’s adult Black female residents and seventeen percent of
the state’s Black male residents lack a high school degree.149 Only twenty-five percent of Black
women and twenty percent of Black men hold a college degree, the lowest
college-graduation percentage of any Virginia racial or ethnic group reported.150 Discriminatory credit,
employment, educational, and financial decisions are likely common experiences
of African Americans in Virginia, as they are elsewhere in the nation, and
persist in the face of existing federal privacy and civil-rights laws.151
The shape that the VCDPA took as a consumer-protection law targeting larger businesses not regulated by federal privacy laws may reflect practical strategies and compromises needed for speedy passage of any politically acceptable bill.152 Specifically, speedy passage may have been enabled by exclusions calculated to skirt federal preemption concerns under the Health Insurance Portability and Accountability Act (HIPAA), the Family Educational Rights and Privacy Act (FERPA), Gramm-Leach-Bliley, the Fair Credit Reporting Act, the Driver’s Privacy Protection Act, and the Children’s Online Privacy Protection Act (COPPA), decades-old laws that themselves have not adequately protected African Americans.
But avoiding federal preemption would not explain all of the VCDPA’s sector exclusions. The rationale for exempting all nonprofits regardless of size—as well as commonwealth governmental entities, including the police, jails, and prisons153—is unclear, but likely relates to a felt need to make the legislation, which passed unanimously and quickly, uncontentious.154 The exemptions represent a lost opportunity to regulate or ban the use of facial-recognition technology by airports and state police,155 or to regulate business practices that involve public-private partnerships that scaffold the Black Opticon, as seen in the Geofeedia example.156 Because the VCDPA does not apply across the board to government entities, it does not address the threat of law-enforcement or public-agency oversurveillance, monitoring, tracking, profiling, or identification. Photographs and data based on photographs commonly used for facial-recognition analytics are excluded from “biometric” data protected under the statute,157 and these could presumably be shared by private platforms with Virginia authorities. While use of photographic data has a place in law enforcement, machine and human errors in the use of such data disproportionately impact African Americans.158
The VCDPA explicitly forbids the processing of personal data in violation of state and federal antidiscrimination laws.159 This is a plus from the point of view advanced by the AAOEA and the call for race-conscious privacy law, since many of the nation’s antidiscrimination laws refer to “race” discrimination and were enacted specifically to address the wounds and scars of slavery and Jim Crow. A “controller,” defined as “the natural or legal person that, alone or jointly with others, determines the purpose and means of processing personal data,”160 is not permitted to provide different goods, services, or prices on a discriminatory basis.161 However, toxic forms of discrimination can creep in. The statute does not disallow targeted advertising—a practice known to be used discriminatorily to exclude Black people from opportunities and to facilitate predation162—but gives consumers the right to opt out of it.163 Consumers can opt out of data processing used for profiling, but only if they know that such processing is or could be taking place. Consumer opt-out rights will only be meaningful if businesses facilitate the process of opting out, thereby increasing the chances that African American and other consumers understand how they can and why they might want to do so.164
The statute’s privacy-notice requirement may help make opt-out rights somewhat more effective if the notices inform consumers of data uses they might wish to opt out of, along with the means and reasons for doing so.165 And it may be relevant that the statute defines “consent” as “a clear affirmative act,”166 arguably limiting businesses’ ability to rely on opt-out consent. That said, according to an Electronic Frontier Foundation analysis, the statute allows firms to charge higher prices to consumers who opt out of targeted ads, sale of their data, and profiling.167 This feature of the law raises a fundamental concern about discrimination reflected in the AAOEA’s background assumption that privacy, a vital good, is unequally distributed in society. If data privacy has a price, low-income consumers may be unable to afford it and will thus become the law’s privacy losers.168 The business sector’s interest in ad revenue must be assessed in light of the weighty interests of low-income consumers of color in not having to sacrifice important forms of data privacy to access platform services.
Like the GDPR, the VCDPA treats racial data as a category of “sensitive
data,” restricting the processing of data regarding racial or ethnic origin,
religious beliefs, citizenship, and immigration status.169 Article 9 of the GDPR
provides that “[p]rocessing of personal data
revealing racial or ethnic origin . . . shall be
prohibited.”170 The GDPR regulates the
collection of race and ethnicity data, although public interest and consent
exceptions are allowed.171
The VCDPA likewise allows some processing of race and ethnicity data.172 It allows such processing
with a consumer’s consent,173
suggesting that whether race data and its proxies ought to be available should
be left to the individual to decide. Consent for race and ethnicity data
processing must be an affirmative act, but it is unclear what will be deemed to
constitute an affirmative act of opting into race data collection by those
interpreting the law for enforcement purposes once it goes into effect in 2023.
Treating race as private and sensitive personal data under state law may be detrimental to the interests of marginalized people of color. In 2003, a so-called “Racial Privacy Initiative” to prohibit public entities from gathering and using race information was put to a direct citizen referendum vote across California.174 Widely opposed by communities of color fearing a disparate impact, the Proposition 54 referendum was indeed rooted in an anti-affirmative-action agenda.175 Hopefully, the politics of race in Virginia will not inspire attempts to attack beneficial forms of affirmative action in education and employment based on the spirit or provisions of the VCDPA protecting race and ethnicity data from nonconsensual processing. Since employment and higher education are exempted from the statute, this worry may be largely unwarranted.176 But without such exemptions, the neutral-seeming provision of the VCDPA limiting nonconsensual race and ethnicity data processing could have a disparate and negative impact on the interests of marginalized groups in private-sector race-conscious remedies and programs. Here, I invoke the “racial impact neutrality goal” of the AAOEA to assess legal reform. This goal requires privacy and data-protection policies to address whether neutral-appearing privacy policies assumed to protect all groups equally may have disparate impacts on African Americans.
Another neutral-seeming feature of the VCDPA may also have disparate impacts. The Virginia statute does not include a private right of action. Enforcement rests in the hands of the state Attorney General.177 The Virginia Trial Lawyers Association opposed the VCDPA on the ground that it will subject Virginia residents to the shifting winds of politics.178 Were the duties of the Attorney General’s office to fall into biased hands, state protection pursuant to the VCDPA might be allocated to Virginians on a racially discriminatory basis. The neutral-seeming absence of a private right of action could disparately impact African Americans, who must depend upon the discretion of authorities to vindicate their rights, especially in a state recovering from a long history of enslavement, forced racial segregation, and social prejudice. Here, I again invoke the “racial impact neutrality goal” of the AAOEA to suggest a basis for disappointment in legal reform.
Despite promising features that could help fight discriminatory data practices in the future, the VCDPA favors Virginia businesses over consumers, and leaves alone Big Tech platforms processing Virginians’ personal data. While a complete assessment is premature, it is unlikely that the VCDPA on its own will do much to help dismantle the Black Opticon. Fortunately, some Virginia policy makers grasp the limitations of the statute relevant to the elimination of discriminatory oversurveillance, exclusion, and fraud. Of note, U.S. Senator Mark Warner described the VCDPA as merely a “first step.”179 Pertinent to the exclusionary surveillance-defeating goal of the AAOEA, Warner sees “the need to rein in so-called dark patterns, manipulative online tactics used to obtain more customer data.”180
African American VCDPA sponsor Cliff Hayes has been careful not to overstate the law’s significance as an answer to Virginians’ privacy problems. Furthermore, he understands that the statute is not a major boon for Black Virginians. Indeed, he publicly stated that the VCDPA was a step-wise law, at first providing only limited protection to consumers.181 Hayes has also expressed skepticism about widespread use of facial-recognition technology,182 noting the problem of higher levels of false positives for people of color and women, bias relating to the use of mug shots, and the importance of avoiding technology that perpetuates racial prejudices.183 Hayes would eventually like to introduce legislation to address data-privacy concerns related to artificial intelligence and facial recognition.184 Time will tell whether he can successfully advance legislation of special importance to African Americans through the Virginia state house. Full dismantling of the Black Opticon in the Commonwealth could require demonstrable convergence between the interests of African American Virginians and those of powerful elites and the white majority.185
D. New Resources for the Federal Trade Commission
The FTC is without a doubt a major data-privacy regulator. This is true notwithstanding the limitations of its jurisdiction, authority, and rule-making ability as a consumer- and competition-protection agency.186 As Daniel Solove and Woodrow Hartzog observed several years ago, “FTC privacy jurisprudence has become the broadest and most influential regulating force on information privacy in the United States.”187 The FTC enforces the Fair Credit Reporting Act, the Children’s Online Privacy Protection Act, and the Gramm-Leach-Bliley Act,188 and has undertaken to regulate data breaches, the internet of things, and, of special relevance here, online platforms.189 The Commission’s “consumer protection cases involving platforms . . . have also included policing disclosures and controls around in-app purchases by children, deceptive employment opportunity claims made by ride-sharing platforms, revenge-porn, and deceptive use of crowd-funding platforms.”190 The Commission has brought enforcement actions “against many major online platforms, including Twitter, Google, Facebook, Snapchat, and Ashley Madison,” alleging that in some of these cases, privacy or security practices were misrepresented to consumers.191 The FTC does not have a major track record of pursuing enforcement actions against platforms whose unfair or deceptive business practices target consumers belonging to marginalized communities, such as African Americans. This could change as a result of the confluence of three things: continued diverse leadership, dedicated funding for a privacy bureau, and a commitment to addressing the problems of communities of color as a strategic priority.
Diverse leadership at the FTC enhances its capacity to help advance the AAOEA. In September 2021, President Joe Biden nominated Big Tech critic and privacy-law expert Alvaro Bedoya to serve as a Commissioner of the FTC.192 He was the Founding Director of the Center on Privacy and Technology at Georgetown University Law Center, and a former Chief Counsel of the U.S. Senate Judiciary Subcommittee on Privacy, Technology and the Law.193 An immigrant from Peru and naturalized U.S. citizen, Mr. Bedoya has demonstrated an understanding of the problem of racial-minority-targeting surveillance.194 Mr. Bedoya’s expertise could increase the effectiveness of the Commission with respect to identifying privacy concerns, setting priorities, and enforcing privacy laws.
The possibility of major congressional funding for a new FTC privacy division emerged in September 2021. The U.S. House Committee on Energy and Commerce voted to appropriate $1 billion to “create and operate a bureau to accomplish the work of the Commission related to unfair or deceptive acts or practices relating to privacy, data security, identity theft, data abuses, and related matters.”195 The proposed appropriation would be available to the FTC in 2022 and remain available until September 30, 2031, for carrying out these purposes.196 The new division would have the resources to aggressively punish unfair trade practices and vigorously enforce laws enacted by Congress. With the mandate to address “data abuses,” the new division would seem to have an enlarged capacity to attack discriminatory exclusion and scamming targeting African Americans—already a stated FTC priority.197 Were the proposed division to materialize, resources could be made available to enforce privacy laws with an unprecedented race-conscious zeal, as called for in the African American Online Equity Agenda. Indeed, some commentators argue that—with increased legal authority, funding, and more technologists—a new privacy division within the FTC would “not just protect ‘privacy,’ but would also address broader data protection concerns, including anticompetitive data practices and the use of data for fraud, racial profiling, and discrimination.”198
The FTC already has a race-conscious antidiscrimination agenda that could be pivoted to focus more specifically on improving online equity for people of color. In 2014, the agency established its “Every Community Initiative” to “modernize and expand the agency’s work and to develop a strategic plan for addressing disparities and other issues affecting communities of color.”199 In June 2016, the agency released a congressionally mandated report, Combatting Fraud in African American & Latino Communities: The FTC’s Comprehensive Strategic Plan, which reported on the outcomes of a “strategy to reduce fraud in Black and Latino communities . . . summarizing the FTC’s relevant law enforcement work as well as its targeted consumer outreach and education initiatives.”200 The report described an instance of race discrimination as measured through one of its enforcement actions: the victims of a payday-loan and bank scam were four times as likely to be African American as white or Hispanic.201
In 2021, the FTC released a second report, Serving Communities of Color, which describes the Commission’s “strides in addressing fraud in Black and Latino communities” and “expanded . . . efforts to include other communities of color such as Asian American and Native American communities, and other non-fraud related consumer issues that also disproportionately affect communities of color.”202 The report identifies specific contextual harms experienced by Asian Americans, Latinos, and African Americans.203 And it explains that the FTC, which emphasizes the importance of education and outreach in addition to enforcement actions,204 has brought roughly two dozen actions involving conduct specifically targeting or disproportionately impacting communities of color.205
Plaudits go to the Commission both for its recent efforts to delineate harms specific to marginalized racial groups and for its readiness to allocate resources to addressing those harms, now and in the future. From the vantage point of the African American Online Equity Agenda, the next step would be to focus more investigations and enforcement actions on allegations of online and platform-related fraud, deception, and unfair trade practices that disproportionately affect and target people of color.
Diverse leadership, additional funding, and stated priorities do not change the jurisdiction and authority of the FTC, which was established in 1914 to combat fraud, deception, and unfair business practices.206 The agency has not been authorized to serve as an all-purpose national online privacy and data-protection regulator.207 Some platform problems characteristic of the Black Opticon may be beyond its current reach. Platform companies’ uses of artificial intelligence pose some of the biggest challenges to platform privacy, and those uses can be discriminatory or unfair to people of color and other consumers.
In a recent book examining the “investigative gaze” of businesses and governments,208 Robert H. Sloan and Richard Warner propose that an expanded FTC or “FTC-like” regulatory agency be “politically empowered and adequately funded with significantly expanded powers to make and enforce judgments of fairness” about whether uses of AI operate on a level playing field.209 Although they argue that it is plausible to think the FTC could regulate AI, Sloan and Warner do not make the case that Congress should in fact explicitly expand the FTC’s jurisdiction to allow for broad regulation of business uses of AI.210 Nor do they address whether the FTC would begin to impose meaningfully large monetary fines on Big Tech were violations found under the expanded interpretation of the FTC’s authority they propose.211 Expanded FTC jurisdiction pursuant to its investigatory, law-enforcement, and rule-making powers is not on the horizon, which fuels interest in an independent federal data-protection agency.
E. A Proposed Federal Data-Protection Agency
Now that we live in a digitally dependent age with a thoroughly digital economy, we cannot rely solely upon existing law enacted decades ago. We need new federal legislation. Were landmark twenty-first-century privacy legislation to follow the lead of the Privacy Act of 1974—the federal statute regulating access to personal information held in federal government records and one of the first federal statutes specifically dedicated to information-privacy protection—it would be accompanied by findings and purposes.212 Preambles of findings and purposes accompanying congressional legislation inform the public about the issues that led Congress to enact the new law. They explain “what Congress hoped to achieve in enacting the legislation.”213 The Privacy Act of 1974’s findings included that the use of computer technology and the misuse of information systems can expose individuals to serious practical harms, and that the right to privacy is a constitutionally protected personal and fundamental right.214
Since 1974, harms associated with information technology have multiplied in number and severity. Congress might have found in 1974, as it could today, that privacy is a basic human right of international stature and a civil right.215 Unlike in 1974, Congress could today find that the right to privacy and related rights of data protection are deeply embedded in numerous state and federal statutes and in the basic law and statutes of jurisdictions around the world.216 New legislation could include findings that harms attributed to online platforms include some that disproportionately affect people of color burdened by racism and prejudice.217 In addition, the findings could reiterate that the ability to obtain and enjoy privacy is affected by structures of class, race, power, and privilege that the design of new law must address in the interest of equity and civil rights.218 In short, the findings of a new comprehensive federal privacy law could and should incorporate the assumptions of the AAOEA: privacy is unequally distributed; well-meaning privacy laws may have disparate impacts; and African Americans are especially vulnerable to data-privacy-related oversurveillance, exclusion, and predation. Including such findings would signal awareness of the special vulnerabilities of African Americans, educate readers of the law about those vulnerabilities, and prepare the public for provisions of new laws that refer to marginalized groups such as African Americans or draw upon the discourse of civil rights. A number of bills aimed at privacy protection were introduced in the 116th and 117th Congresses, none with preambles stating intentions to combat racial disparities as such.219 But an examination of the provisions of legislation introduced by Senator Kirsten Gillibrand reveals equitable intentions and the potential for measures specifically responsive to the guidance of the AAOEA.
In June 2021, Senator Gillibrand, joined by cosponsor Senator Sherrod Brown, introduced the Data Protection Act of 2021 (DPA).220 Their bill would create an autonomous federal Data Protection Agency (FDPA) headed by a presidentially appointed director,221 decreasing dependence on the FTC for privacy-law enforcement. Whether the nation would need both an FTC privacy bureau and a general-purpose data-protection agency is unclear, since the precise parameters of each are not fully determined. But the bill does not presuppose major changes at the FTC, and it would create durable institutional structures and mechanisms for realizing major reforms. Through the roles the DPA assigns to its three divisions, the FDPA would enable consequential policymaking, research, and law enforcement; protect against privacy harms and discrimination; oversee data practices; and propose remedies for the adverse social, ethical, and economic implications of data practices.222 The bill would also enable efforts to address what a Brookings report refers to as high-complexity, low-consensus “hard issues”—namely, limits on data processing, algorithmic transparency, and algorithmic fairness.223
The Gillibrand-Brown proposal was unique among the several privacy bills introduced in the 116th and 117th Congresses. It alone called for the creation of an FDPA with a Civil Rights Office to “regulate high-risk data practices and the collection, processing, and sharing of personal data.”224 The definition of “high-risk” data practices reveals a specific (though implicit) legislative purpose to attack the Black Opticon. The bill defines a “high-risk data practice” to include an action by a data aggregator that involves: automated decision systems; data respecting protected-class status, income, and criminal convictions; access to services, products, and opportunities; systematic processing of publicly accessible data on a large scale; profiling of individuals on a large scale; children, youth, and the elderly; people with disabilities; and geolocation processing.225 The “high-risk” data practices of particular concern to the statute are those of commercial data aggregators, defined as “any person that collects, uses, or shares, in or affecting interstate commerce, an amount of personal data that is not de minimis, as well as entities related to that person by common ownership or corporate control.”226 Big Tech platforms meet the definition of data aggregators, since they collect, use, or share more than nominal amounts of personal data in interstate commerce; their data practices would therefore fall under the purview of the FDPA.227
The FDPA would have the power to conduct investigations of possible violations, issue subpoenas, grant injunctive relief and equitable remedies, and, critically, impose civil penalties and fines.228 Fines of $3 million per day could deter large and small tech firms more effectively than the penalties currently levied by the FTC.229 A portion of fees and assessments would be placed in a “Data Protection Agency Fund” to support agency activities.230 To foster greater accountability to the public, the Act would mandate the solicitation of reports and examinations from large data aggregators, as well as agency review of mergers of large data aggregators or mergers involving the transfer of the personal data of over 50,000 persons, and reports to the FTC and the Department of Justice on the privacy implications of such mergers.231
Of course, creating a new agency costs money and takes time.232 But in the past, “Congress has repeatedly created new departments and new administrative agencies to meet problems arising as the nation and its economy matured.”233 The digital economy presents a serious set of problems for modern life that warrants a new administrative agency. The challenges posed by platform regulation are broad-ranging and highly technical, and they implicate core civil rights and civil liberties. The need to design and enforce nimble platform regulation stands among the reasons why the United States should take seriously the possibility of creating a specialized agency.234
Senator Gillibrand’s DPA is not likely to move through Congress soon or intact, but if and when it eventually does, some of its current provisions could become law. Setting a high bar for future legislative-reform proposals, the Gillibrand Act is striking for its deep responsiveness to calls for equitable platform-privacy governance. In the past thirty years, equity has not been a clear top priority of privacy legislation. The Act signals a new era, laying out a dynamic framework for an agency with unprecedented authority to pursue equity in the context of data protection through all three of its major units: Civil Rights, Research, and Complaints.235 Through these three functional divisions, the FDPA would have the authority to enforce new data-protection rules enacted by Congress or promulgated by the agency itself.
The protection of civil rights is increasingly recognized as an important component of privacy and data-protection laws, as evidenced by recently proposed federal privacy and data-protection statutes that contain nondiscrimination provisions.236 The civil-rights protection needed to address the Black Opticon is manifest in the provision that the FDPA’s Office of Civil Rights would “ensure that the collection, processing, and sharing of personal data is fair, equitable, and non-discriminatory in treatment and effect.”237 The equity goal is likewise evident in the provision charging the Office of Civil Rights with promoting the traditional civil-rights goal of equal opportunity through responsibility for “developing, establishing, and promoting data processing practices that affirmatively further equal opportunity to and expand access to housing, employment, credit, insurance, education, healthcare, and other aspects of interstate commerce.”238 Recognizing the importance of coordination and connection, the Office would “coordinate[] the Agency’s civil rights efforts with other Federal agencies and State regulators . . . to promote consistent, efficient, and effective enforcement of Federal civil rights laws”;239 would “work[] with civil rights advocates, privacy organizations, and data aggregators on the promotion of compliance with the civil rights provisions under this Act, rules and orders promulgated under this Act, and Federal privacy laws”;240 and would “liaise[] with communities and consumers impacted by practices regulated by this Act and the Agency, to ensure that their needs and views are appropriately taken into account.”241 The DPA defines “protected class” as “the actual or perceived race, color, ethnicity, national origin, religion, sex, gender, gender identity or expression, sexual orientation, familial status, biometric information, genetic information, or disability of an individual or a group of individuals.”242 The Office of Civil Rights would be empowered to investigate claims that members of a protected class are disadvantaged by platform practices or policies, such as a ban-optic advertisement-purchasing platform that prevented African Americans from viewing certain advertisements.243
The Act also establishes a Research unit whose responsibilities manifestly promote the ideal of equitable data policies and practices on online platforms. This unit would support the enactment of comprehensive, well-informed, and equitable federal information-privacy laws. Its responsibilities would include “researching, analyzing, assessing, and reporting” relating not only to “the collection and processing of personal data” and “the collection and processing of personal data by government agencies, including contracts between government agencies and data aggregators,”244 but also to “unfair, deceptive, or discriminatory outcomes that result or are likely to result from the use of automated decision systems, including disparate treatment or disparate impact on the basis of protected class or proxies for protected class.”245 Staffed with data scientists and privacy-law experts, the Research unit would be charged with measuring the costs and benefits of “high-risk data practices,” including identifying their unintended consequences and assessing their potential disparate impacts and privacy harms.246 The Research unit’s mandate would go to the heart of concerns about the harms that stem from online platforms and disproportionately impact African Americans and others in protected classes. The Act defines “privacy harms” broadly to include economic, physical, and emotional harms.247 The threats and harassment people of color face online would appear by definition to count as physical harms, and the burdens of anxiety and stigma would count as emotional harms.
Further, through its Complaints unit, the DPA would keep an ear to the ground, with the capacity to quickly identify and address online-platform inequities. The Complaints unit within the new agency would be dedicated to collecting and tracking grassroots consumer complaints made by telephone or on a website. Incentivizing resort to the new agency, a “Data Protection Civil Penalty Fund” would be available to compensate individual victims and classes of victims of federal privacy-law violations.248
Through the design of the FDPA and its allocated responsibilities, the Act boldly rejects some experts’ tepid approach to civil-rights issues related to privacy governance.249 Viewed through the lens of race, Senator Gillibrand’s 2021 reform proposal merits praise. It prioritizes the federal government’s ability to respond to documented racial bias against African Americans and other vulnerable groups through an equity-conscious and protected-class-conscious FDPA comprising a trio of civil-rights, research, and complaint-gathering units. The proposed Office of Civil Rights could prove especially critical to addressing disparate impacts and racial bias in algorithms and automated decision-making systems.250
Enthusiasm for the high bar set by the Act must be tempered by realism. It is uncertain what it will take for the DPA to go from proposal to reality, and when that might happen. Within the United States, comprehensive federal legislation will require that Congress resolve issues of overlap, duplication, and preemption that will multiply as other states follow the lead of Virginia, California, and Colorado—and as the FTC potentially pushes ahead to establish its own in-house privacy bureau.
In addition, while the features of the DPA discussed in this Section should enable meaningful measures to address the platform-equity concerns raised by people of color, the Act is not a cure for all of the unfounded surveillance, AI disparities, and exclusion and exploitation experienced by Black people online. The Act might hold platform firms more responsible for noxious content, but it cannot force racially biased platform users to leave people of color alone and regard fellow users with equal respect. No law can. And the Act does not, of course, address offline law-enforcement abuses. It might demand limits on public-sector uses of facial-recognition technologies and biometrics, but it cannot prevent racism-related discretionary uses of force by police on the ground that violate expectations of privacy.
Conclusion
Simone Browne innovated “the concept of racializing surveillance,” defined as “a technology of social control where surveillance practices, policies, and performances concern the production of norms pertaining to race and exercise of ‘a power to define what is in or out of place.’”251 Digital platforms are racializing technologies in this sense. Despite some scholars’ rejection of the panopticon metaphor that it enfolds,252 the Black Opticon is a useful, novel rubric for characterizing the several ways African Americans and their data are subject to pernicious forms of discriminatory attention by racializing technology online. Such attention can work to keep Black people in a historic place of social and economic disadvantage.
Digital society and online platforms “reinforce and reproduce racist social structures” through “software, policies, and infrastructures that amplify hate, racism, and white supremacy.”253 They cause social harms, such as privacy loss; political harms, such as threats to democratic discourse and choice; and the abuse of economic power.254 Platforms could in theory use their resources voluntarily to counteract these abuses.255 Instead, Big Tech struggles to self-govern its platforms with respect to racist content and discrimination in opportunity, services, and privacy. Platforms gesture at change more than they fundamentally change. The paucity of people of color in management and leadership positions in Silicon Valley worsens the situation, since their absence excludes “advanced-degree holders [in ethnic studies] . . . with deep knowledge of history and critical theory.”256
This Essay advocates for treating some of the ills affecting African Americans on online platforms with privacy and data-protection reforms, while recognizing that the complete remedy demands “a coordinated and comprehensive response from governments, civil society and the private sector.”257 I believe the Black Opticon of panoptic, ban-optic, and con-optic discrimination is amenable to attack by well-designed, race-conscious legal reform. Which of the three pillars of the Black Opticon will prove most amenable to destruction through privacy law is an open question I have not attempted to answer here.
Racially equitable policies and aspirations emerge to varying degrees in enacted and proposed privacy and data-protection reforms, such as the VCDPA, the proposed FTC privacy bureau, and the proposed federal data-protection agency. These reform agendas have a grave purpose, as grave as the purposes that motivated the twentieth-century civil-rights movements. Understandings of the specific impact of diminished data privacy and inadequate data protection on African Americans will continue to unfold. But in the meantime, we should consciously seek to regulate platforms—and, indeed, all of the digital economy—with an agenda that centers nondiscrimination, antiracism, and antisubordination on behalf of African Americans.258
I would like to thank the Yale Law Journal, the Information Society Project, and the Knight Foundation for publishing the innovative series Envisioning Equitable Online Governance, which includes my Essay. I would like to express gratitude to Professors Christopher Yoo, Cary Coglianese, Niva Elkin-Koren, Tamar Kricheli-Katz, and Ezekiel Dixon-Roman for encouraging this Essay and giving me platforms on which to share its ideas. I thank Jeramie Scott, Senior Counsel at the Electronic Privacy Information Center (EPIC), for early guidance on looking at privacy through the lens of race, and my trusted Penn research assistants, Alexander Mueller and Matthew Brotz. Finally, my very special thanks go to Roman Leal for his extraordinary patience in guiding me through the editorial process at the Yale Law Journal.