Data Laws at Work
abstract. In recognition of the material, physical, and psychological harms arising from the growing use of automated monitoring and decision-making systems for labor control, jurisdictions around the world are considering new digital-rights protections for workers. Unsurprisingly, legislatures frequently turn to the European Union (EU) for inspiration. The EU, through the passage of the General Data Protection Regulation in 2016, the Artificial Intelligence Act in 2024, and the Platform Work Directive in 2024, has positioned itself as the leader in digital rights, and, in particular, in providing affirmative digital rights for workers whose labor is mediated by “a platform.” However, little is known about the efficacy of these laws.
This Essay begins to fill this knowledge gap. Through close analyses of the laws and successful strategic litigation by platform workers under these laws, I argue that the current EU framework contains two significant shortcomings. First, the laws primarily position workers as liberal, autonomous subjects, and in doing so, they make a category error: workers, unlike consumers, are subordinated by law and doctrine to the firms for which they labor. As a result, the liberal rights that these laws privilege—such as transparency and consent—are insufficient to mitigate the material harms produced through automated labor management. Second, this Essay argues that by leaning primarily on transparency principles to detect, prevent, and stop violations of labor and employment law, EU data laws do not account for the ways in which workplace algorithmic management systems often create new harms that existing laws of work do not address. These harms, which fundamentally disrupt norms about worker pay, evaluation, and termination, arise from the relational logic of data-processing systems—that is, the way that these systems evaluate workers by dynamically comparing them to others, rather than by evaluating them objectively based on fulfillment of ascribed duties. Based on these analyses, I propose that future data laws should be modeled on older approaches to workplace regulation: rather than merely seeking to elucidate or assess problematic data processes, they should aim to restrict these processes. The normative north star of these laws should be proscribing the digital practices that cause the harms, rather than merely shining a light on their existence.
Introduction
Despite widespread legal concerns about the technology industry’s surveillance of consumers,1 the most intrusive and far-reaching digital technologies for monitoring and controlling human behavior do not target people when they make or contemplate purchases. They target people at work. In many jobs and sectors, particularly low-wage ones, digital workplace technologies execute novel forms of labor control. In some cases, they even replace human managers, whose social and technical knowledge about a job, the workplace, and a particular worker might otherwise be used to make hiring decisions, determine quotas, allocate work, decide pay, evaluate performance, and make disciplinary or termination decisions.2
A growing number of workers, including so-called “gig” and “platform” workers (broadly defined as workers who are completely managed through smartphone applications), are now hired, evaluated, paid, disciplined, and terminated through automated systems, with little to no meaningful human oversight or intervention.3 Because platform companies often treat their workers as self-employed contractors who are not afforded the protection of established employment and labor laws, these firms have been uniquely positioned to experiment with remote algorithmic control and pioneer new forms of digitalized workforce management.4 Platform work, in this sense, has been a canary in the coal mine. Innovative systems of automated worker control, which originated in the platform context, have since been imported to other employment sites—including in the transportation, delivery, warehousing, hospitality, janitorial, healthcare, computer-science, and education sectors.5
These new systems of workforce management can be divided into two broad categories: automated monitoring systems (AMSs) and automated (and augmented) decision-making systems (ADSs).6 AMSs collect a wide array of personal data from workers both on and off the job, including data on speed, movement, and behavior, and then feed that data into ADSs to carry out or support a broad range of tasks, such as determining work allocation, communicating with a worker (via a chatbot), or evaluating workplace performance. ADSs (or offline procedures that heavily rely on ADSs) are also sometimes used to perform the most central functions of the employer: to determine whether to hire a worker, how much to pay them, when to discipline or reward them, and critically, when to terminate them.7
Proponents of the digitalization of labor management—including artificial intelligence (AI) companies, data brokers, employers, and some scholars8—argue that digital labor-management systems bring machine objectivity into the workplace via digital on-the-job surveillance and control, thus bettering the lives of workers by purportedly increasing scheduling flexibility and correcting for longstanding gendered and racial wage differentials.9 They also assert that these systems improve firm accuracy and efficiency while enhancing worker satisfaction.10
To be sure, together with appropriate legal safeguards and prohibitions, digital technology could be designed to help employers and workers achieve more fair, equitable, free, and democratic workplaces. To date, however, findings from sociotechnical research11 and the cultivated expertise of workers cast doubt on the purported positive impacts of existing systems. An emergent body of empirical research on workers who are digitally managed—including research on platform workers in the logistics and transportation industries—raises serious alarms about the social, economic, psychological, and physiological harms imposed by extant forms of AMSs and ADSs.12 Many of these harms can be understood as intensifying familiar problems. For example, research suggests that since datasets embody preexisting biases, the automated systems that rely on such data may replicate historical forms of discrimination in hiring and pay.13 Investigations have also found that as with human oversight and evaluation, machine errors are not uncommon, but they are hard to detect and correct, resulting in erroneous, unfair evaluations and terminations with no avenue for redress.14 Other studies observe that algorithmically determined quota systems can push workers to work too hard and too quickly, resulting in serious bodily injury and offsetting the last century of occupational health and safety interventions.15
By and large, these researchers suggest that the intensified workplace harms caused by the introduction of AMSs and ADSs are the result of “information asymmetries” between workers and their employers.16 Advanced AMSs invisibly enable employers to collect detailed data about workers, their movements, and their behaviors.17 This data is then fed into ADSs—including machine-learning systems—which generate black-box rules to govern the workplace.18 Scholars tend to assume that if workers had access to the data that is collected on them, along with knowledge of how it is used by ADSs, then they could use traditional legal avenues (for example, litigation, consultation, and collective bargaining) to challenge machine-generated mistakes and biases through the existing laws of work, just as they can challenge human-generated mistakes and biases.19 Likewise, existing scholarship tends to assume that if workers knew and understood the algorithmic rules that govern their workplaces, they could spot and correct violations of prevailing labor and employment laws, which already protect against unsafe workplaces, identity-based discrimination, low pay, and—applicable to the European Union (EU), but not to private, nonunionized workplaces in the United States—“unjust” terminations.20
Building on this research, the first wave of legislation to address the problems arising from digitalized labor control focuses almost exclusively on information transparency rights and mandates, including data access, data-processing explainability, and impact assessments. The undisputed legislative leader has been the EU. In 2016, the EU passed the first omnibus law to accord data rights to natural persons, the General Data Protection Regulation (GDPR), which took effect in 2018 and has since been replicated in many jurisdictions around the globe, including in some U.S. states—most consequentially in California.21
Drafted primarily with consumers in mind, the GDPR also applies to workers, though comparatively few have mobilized to exercise their rights under the law. More recently, in 2024, many of the rights embodied in the GDPR—including data-access rights, data-processing explainability rights, and impact assessments—were specifically mandated for platform work in the EU via the Platform Work Directive (PWD). The PWD also includes novel rights that are intended to directly address ADSs. For instance, the directive forbids platform firms from processing data on workers’ emotional and psychological states and personal beliefs, thus granting platform workers greater data-processing protections than any other workers in the EU.22 Also in 2024, the EU passed the Artificial Intelligence Act (AI Act), which labels the workplace a high-risk setting, a designation that triggers predeployment and postmarket safeguards for employment-related AI.23
Together, the GDPR and the AI Act create, for the first time ever, a web of critically important—if experimental—data and data-processing rights for the work context. The PWD then builds on these rights to extend even more data protections to a subset of workers—platform workers—who are almost exclusively managed by digital machinery. As the European Commission considers the possibility of an algorithmic-management directive that would extend the rights created through the PWD to other workforces, and as jurisdictions around the world consider laws and regulations to emulate the EU legislation, determining the efficacy of these first-wave interventions is critical. At the time of writing, however, we still know very little about how adequately these new rights address the significant harms and problems posed by on-the-job use of AMSs and ADSs.24
This Essay begins to fill this gap by offering a close study of these laws, along with an analysis of a recent natural legal experiment: pioneering litigation by platform workers who exercised their data and data-processing rights under the GDPR and won access to information about termination and pay. Ride-hail workers in the EU, supported by the nongovernmental organization (NGO) Worker Info Exchange (WIE), the App Drivers and Couriers Union (ADCU), and privacy advocates, were among the first to successfully challenge platform firms that, in some cases, refused to release any data at all and, in others, released only limited and insufficient data and data-processing information.25 However, in an unexpected twist, the success of this litigation proves the insufficiency of current regulation.26 While the years-long litigation led to monumental and precedent-setting judgments against ride-hail companies Uber and Ola, workers have been unable to leverage the litigation wins—and the data transparency and explanations achieved through these wins—to effect meaningful, systematic harm reduction.27
Through a critical analysis of this strategic litigation and the laws underpinning the litigation, this Essay argues that the first wave of data and data-processing rights for workers does not effectively address the harms arising from algorithmic management because it makes two conceptual errors. First, the laws treat workers as liberal, autonomous subjects. But by law, when people are at work, they are not free to behave autonomously. Rather, the law formally subordinates them to the firms for which they labor.28 Arguably, then, workers’ primary interests lie not in transparency, privacy, and consent, but in job certainty, wage security, and dignity.29 Moreover, given the explicit legal domination afforded to employers in the workplace, laws that place the burden on workers to access and understand data-processing systems, and then to use this knowledge to circumvent present and future harms, are of limited practical utility. Low-wage workers generally lack the resources, power, and technical insight to know when their employers are not adequately complying with their obligations under data laws.
Second, by leaning primarily on transparency principles to detect, prevent, and stop violations of labor and employment laws, the GDPR, the PWD, and the AI Act do not account for the ways in which workplace algorithmic-management systems often create new harms that existing laws of work do not address. These harms, which fundamentally disrupt norms about worker pay, evaluation, and termination, arise from the relational logic of data-processing systems. A worker managed through or with the assistance of ADSs may not be rewarded or disciplined based on an evaluation of their individual rule compliance, productivity, and effort.30 Rather, their intended behavioral modifications may be contextual and iterative, with variable outcomes, expectations, or results based on how AMSs and ADSs understand and position them in relation to their coworkers in general and at any given time.31 As these data-processing laws are amended and expanded in the EU and as they are considered for replication around the world—including in California and other U.S. states—legislators, workers, and worker representatives should attend to the new harms of algorithmic management and address the shortcomings of existing data laws.
This Essay proceeds in three Parts. Part I analyzes the GDPR, the AI Act, and the PWD specifically as laws of work and examines their principal approaches to data and data-processing rights—notice, transparency, and impact assessments—in relation to the pressing problems and precarities produced through automated labor control. Part II then positions these data laws in relation to the broader law and political economy of the workplace and argues that they do not account for workers’ positionality as “illiberal” subjects—forbidden, by legal doctrine, from behaving in ways that are at odds with the business interests of their employers. Finally, Part III analyzes a natural experiment to extract lessons for future regulation of automated labor control. In particular, it examines the case study of Uber and Ola ride-hail workers who mobilized to vindicate their rights as data subjects under the GDPR in an attempt to address problems caused by ADSs related to pay and termination. The Essay concludes by recommending a guiding principle for future data laws, one that reflects older approaches to workplace regulation: regulation must move beyond merely elucidating and assessing data processes and shift more pointedly towards restricting the use of such data and processes where the systems cause harmful workplace outcomes.
I. the first wave of data rights for workers: the eu context
Despite the overarching data-minimization goals embedded in the GDPR,32 digital data collection and data processing in the workplace have grown dramatically in reach and sophistication since the law’s passage in 2016. From 2019 to 2022, coinciding with pandemic stay-at-home orders and new work-from-home policies, global demand for worker-monitoring software reportedly increased by sixty-five percent.33 Across service sites and product supply chains, this intensified digital monitoring was coupled with the development of sophisticated automated decision-making software, which businesses deployed to make management decisions more rapidly, to increase production or service speed and scale, and to lower labor overhead.34
Firms that self-identify as “platforms”35 and use what scholars have called a “platform management model”36 were among the first to experiment with what is now called “algorithmic management”—the automation of work processes and management functions, including coordination and control of a workforce, often via machine-learning systems.37 But the techniques of digitalized workplace surveillance and algorithmic management first observed in “platform work” were quickly adopted by firms with more traditional employment models.38 Accordingly, extant research on platform work is particularly useful for understanding trends in algorithmic management across the labor market.
Two particularly significant forms of algorithmic management, which this Essay uses to ground its analyses of existing data laws, are the uses of ADSs (1) to set wages (sometimes through the allocation of work or wage products) and (2) to evaluate and terminate workers. Through automated wage-setting practices, known in the platform-work literature as algorithmic wage discrimination, firms use social data39—including data extracted from workers’ labor—to “personalize and differentiate wages for workers in ways unknown to them, paying them to behave in ways that the firm desires, perhaps for as little as the system determines that the workers may be willing to accept.”40 While algorithmic wage discrimination—the transference of consumer price discrimination to the work context—was first documented in on-demand work, traditional employers have also commenced using machine-learning software to “tailor each employee’s compensation” in ways that remain opaque to the workforce.41 Similarly, “deactivation,” a euphemism for termination engineered by on-demand firms, has traveled to more traditional employment settings in which automated decision-making software is now used to invisibly and opaquely evaluate and dismiss workers, even in just-cause jurisdictions.42
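To make the mechanism of algorithmic wage discrimination concrete, the following stylized sketch—a hypothetical illustration only, in which every field name, coefficient, and figure is an assumption for exposition rather than a description of any firm’s actual system—shows how a per-job pay offer might be tuned toward the lowest amount a model predicts a particular worker will accept.

```python
# Stylized sketch of "algorithmic wage discrimination": the offer shown to
# each worker is tuned toward the lowest amount the model predicts that
# worker will accept. All field names, coefficients, and figures are assumed
# for exposition; this is not a reconstruction of any firm's system.

from dataclasses import dataclass


@dataclass
class WorkerProfile:
    worker_id: str
    recent_acceptance_rate: float   # share of recent offers accepted
    hours_this_week: float          # crude proxy for financial need


def predict_reservation_pay(profile: WorkerProfile, base_rate: float) -> float:
    """Predict the lowest per-job pay the worker is likely to accept.

    A real system would use a trained model; here a toy heuristic stands in:
    workers who accept most offers, or who appear to need the hours, are
    predicted to accept less.
    """
    discount = 0.0
    if profile.recent_acceptance_rate > 0.8:
        discount += 0.10
    if profile.hours_this_week > 40:
        discount += 0.05
    return round(base_rate * (1 - discount), 2)


if __name__ == "__main__":
    base = 18.00  # nominal per-job rate
    a = WorkerProfile("W1", recent_acceptance_rate=0.9, hours_this_week=50)
    b = WorkerProfile("W2", recent_acceptance_rate=0.4, hours_this_week=10)
    # Two workers are offered different pay for the same job at the same moment.
    print(predict_reservation_pay(a, base))  # 15.3
    print(predict_reservation_pay(b, base))  # 18.0
```

Even this toy heuristic reproduces the core feature documented in the literature: two workers can be offered different pay for identical work at the same moment, based on inferences about what each will tolerate.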
Both automated wage-setting and automated evaluation/termination systems create novel harms and new logics of labor control, often allowing firms to hew to the letter of existing employment laws while evading their spirit. For example, in low-wage sectors, hourly wages are conventionally transparent to individual workers, certain, and set by individual or collective contracts. Though performance-based variable pay using offline evaluation processes and bonus structures is not uncommon, wage discretion is limited by laws that protect workers from discrimination based on protected identities and those that create minimum-wage and overtime-wage floors.43 Variable pay and discipline practices, even in the at-will employment context, typically operate through norms and logics that associate hard work, rule-following, and worker loyalty with higher pay and work security.44 But the novel logics of some data-processing systems, discussed further in Part II, disrupt these norms and introduce new experiences of uncertainty to the workplace, thereby unsettling the relationship between work and economic security.
Just as concerns about data and data-processing in the consumer context have largely focused on safeguarding individual data privacy and consent, concerns about data and data-processing in the workplace have focused centrally on transparency, to the detriment of other principles like fairness and economic security.45 According to the prevailing view among analysts, from which this Essay departs, the central problem with algorithmic management is that workers governed by such systems lack knowledge about the basic rules they must follow. In contrast to labor process customs of nondigital, offline scientific management, in which workers are typically informed of workplace expectations,46 workers are left to wonder: How are their wages determined? In what ways are they being evaluated and by what metrics? What is the world of behaviors that might lead to discipline or termination? Knowing what data is being extracted and understanding the logic behind the ADSs, observers argue, would enable workers to adjust to the digital labor processes and to address violations of existing labor laws. Following this reasoning, legislative authorities in a few jurisdictions, including in some U.S. states and in the EU, have moved to create transparency rights for workers or to extend existing data-transparency rights to the workplace.
In the following Sections, I examine the most prominent of these data laws in the EU—specifically, laws embodied in the GDPR, the AI Act, and the PWD—and analyze how they attempt to address the problems raised by algorithmic labor control. I focus on these laws because they, and in particular the GDPR, have become global models for workers’ data- and digital-protection laws.47 For example, the California Privacy Rights Act (CPRA), which is the most expansive and developed data-rights law for workers in the United States, is explicitly modeled on the GDPR. The EU, meanwhile, may soon consider adopting an algorithmic-management directive modeled after the PWD but applicable to all workers.
A. The General Data Protection Regulation (2016)
The GDPR, the first broadscale law governing data privacy for “natural persons,” went into effect in May 2018 and imposes “obligations onto organizations anywhere [in the world], so long as they target or collect data related to people in the EU.”48 In practice, the GDPR creates regulations “on the usage, storage and movement of data.”49 While the GDPR’s emphasis on making data usage explainable to natural persons is primarily aimed at allowing consumers to make informed decisions about the data collection and data processing to which they consent,50 these obligations can also be leveraged by workers who, by law, have very few privacy rights in the workplace. Even though “opting out” or refusing to consent to a data-processing system at work is effectively impossible without exiting a job, the GDPR provisions could, observers argue, at least help workers to understand how they are monitored and managed.51
The GDPR is a regulation, not a directive, which means that except in very specific instances, EU member states were required to adopt it into national law without changes.52 However, member states were allowed to modify how the law applied to employment, a formal recognition of the distinctive nature of work.53 Article 88, which governs data-processing rights in employment, gives significant leeway to each member state to adopt its own laws with regard to the “data subject’s human dignity, legitimate interests and fundamental rights, with particular regard to the transparency of processing [and] the transfer of personal data.”54 Member states developed a patchwork of data-processing laws in response to Article 88, with varying degrees of protection for workers,55 though these laws all reflect the GDPR’s general approach to workers’ data rights as articulated in Recital 4, which is to find a balance between an employer’s right to monitor their employees in the workplace and the employee’s right to privacy.56 On its face, this approach pits the ideal of worker “consent”—once informed about data collection and data-processing, workers are free to exit the job—against the employers’ “legitimate interests.” It also neglects other worker interests, including economic security, with the unstated assumption that those interests are adequately addressed through the existing laws of work, including minimum-wage and just-cause regulations. However, as developed in Part II, given the legal deference to the managerial or employer prerogative, “consent” to workplace monitoring provides only a facade of privacy protections for workers who must work to live.
To date, the primary rights under the GDPR that have been utilized by workers and their representatives to gain transparency over data collection and automated decision-making systems are outlined in Articles 15, 20, and 22. On their face, these Articles allow workers to obtain their data and to understand the logic of the data-processing rules that algorithmically control them. However, even though personal data collected by employers are essentially valueless to workers in the absence of insight into why they are being collected and how they are being used,57 some employers have taken the position that the release of firm logics undercuts the competitive advantages created through algorithmic labor control.58 Consequently, while employers have been more forthcoming in releasing (at least some) personal data, they have been more reticent to release the logic of their data-processing systems.59
Nevertheless, the GDPR does mandate this kind of logic transparency.60 Articles 15 and 22, most critically, give workers the right to know the rules of the workplace—to understand the automated systems that are used to evaluate their labor, determine their wages, discipline them, and terminate their employment—and to contest the misapplication of these rules.61 Article 15 guarantees natural persons, including workers, the right to be informed about the existence of automated decision-making and to be provided with meaningful information about the logic by which these systems process their data.62 As a complement to this transparency mandate, Article 22 effectively provides workers with the right to have a “human in the loop” when decisions being made have legal or significant effects.63 The plain text of Article 22 mandates that while firms can rely on evaluations from ADSs to make workplace decisions—like terminations—that have significant effects on workers, they cannot rely solely on those systems.64
Article 20, meanwhile, gives workers the right to receive the personal data concerning them and the right to data portability. Article 12 requires such data to be provided in a “concise, transparent, intelligible and easily accessible form, using clear and plain language, in particular for any information addressed specifically to a child.”65 However, though many workers have requested their data under Article 20, the data they receive is often practically meaningless to them without further processing or visualization, and advocates argue that the companies “frequently omit the data categories most conducive and necessary for interrogating the conditions of work.”66 Given the obfuscating nature of digital systems, it is nearly impossible for workers (and regulators) to know whether the information requested has been properly made available. For example, in 2019, Uber provided telematic data in response to data-subject access requests, but it stopped doing so in 2020 and 2021.67 Workers who sought this data were left to wonder whether Uber had stopped collecting this safety data, or whether the company simply refused to release it to drivers for inspection.68 Without a full-scale public auditing of Uber’s systems, it is impossible to know.
Beyond the enumerated rights listed in Articles 15, 20, and 22, Article 35 of the GDPR contains another important safeguard against excessive monitoring of natural persons.69 The Article mandates that firms acting as data controllers carry out Data Protection Impact Assessments (DPIAs) prior to processing personal data, if the processing is “likely to result in a high risk to the rights and freedoms of natural persons.”70 In the case of employment, however, this requirement has had little bite: though ADSs that process personal data often pose such consequential risks to workers, rarely are such impact assessments carried out or made public. One reason may be that firms narrowly interpret “personal data” to exclude “de-personalized” banded or grouped data derived from personal data.71 For example, a firm like Uber might repurpose personal data related to how often a worker rejects a ride to train machine-learning systems on what rides to allocate to that worker and when. But the ADS that allocates the work might be using banded data, in which that worker is included in a subset of similarly behaving workers. Thus, a firm may decide that since only data derived from personal data is used to train the machine-learning system, a DPIA is not required for that system.72 Another limitation of Article 35 is the lack of guidance on what constitutes an adequate assessment. As Jacob Metcalf, Emanuel Moss, Elizabeth Anne Watkins, Ranjit Singh, and Madeleine Clare Elish have written, “What counts as an adequate assessment, when that assessment happens, and how stakeholders are made accountable to each other are contested outcomes shaped by fraught power relationships.”73 This is a particularly salient concern for the workplace.
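The banding maneuver described above can be illustrated with a short sketch. The example below is hypothetical—the field names, thresholds, and band labels are assumptions for exposition, not a reconstruction of Uber’s actual pipeline—but it shows how individual behavioral data might be collapsed into coarse, de-identified groups before model training, supporting a firm’s claim that the downstream allocation system never “processes personal data.”

```python
# Hypothetical illustration of "banding": individual behavioral data is
# collapsed into coarse, de-identified groups before being used to train a
# work-allocation model. All field names, thresholds, and band labels are
# assumptions for exposition, not a description of any firm's actual system.

from typing import Dict, List


def band_rejection_rate(rate: float) -> str:
    """Map an individual driver's ride-rejection rate to a coarse band."""
    if rate < 0.20:
        return "low_rejector"
    if rate < 0.50:
        return "mid_rejector"
    return "high_rejector"


def build_training_rows(driver_stats: List[Dict]) -> List[Dict]:
    """Drop direct identifiers and keep only banded features.

    A firm might argue that the resulting rows are merely "derived" data
    rather than personal data, and that the downstream allocation model
    therefore needs no DPIA -- even though every row originates in one
    worker's monitored behavior.
    """
    rows = []
    for d in driver_stats:
        rows.append({
            "rejection_band": band_rejection_rate(d["rejection_rate"]),
            "hours_band": "40+" if d["weekly_hours"] >= 40 else "<40",
            # the driver_id field is deliberately omitted here
        })
    return rows


if __name__ == "__main__":
    sample = [
        {"driver_id": "D123", "rejection_rate": 0.60, "weekly_hours": 91},
        {"driver_id": "D456", "rejection_rate": 0.15, "weekly_hours": 25},
    ]
    print(build_training_rows(sample))
    # [{'rejection_band': 'high_rejector', 'hours_band': '40+'},
    #  {'rejection_band': 'low_rejector', 'hours_band': '<40'}]
```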
Since the implementation of the GDPR, many of the rights enumerated by these Articles have been undermined in practice. In some cases, firms have released the data to workers in non-machine-readable formats, making it impossible to analyze even when workers partner with data analysts.74 In other cases, definitional ambiguities have prevented workers from gaining the insights that they need.75 Companies have also frequently argued that releasing the data-processing logic is tantamount to releasing “trade secrets,” or that doing so would harm the security of others.76 In the absence of affirmative litigation—which requires substantial resources that most workers lack and puts workers at risk of retaliation—workers who dare exercise their rights must accept whatever data firms provide to them.
TABLE 1. summary of key data rights afforded to workers under the gdpr
B. The Artificial Intelligence Act (2024)
The AI Act, at the time of writing, is the newest of the European laws to safeguard against the potential impacts of AI systems.77 The Act follows a “risk-based approach,” reinforces GDPR data rights, and creates some new transparency and assessment mandates for the use of AI at work.78 In contrast to the GDPR, which places the burden on the worker to invoke their “right to know”79 when automated decision-making systems are being used, the AI Act directs employers to inform workers and workers’ representatives affirmatively that they are subject to these AI systems.80 But this affirmative duty does not include any requirement to explain the workplace rules or system logics that are embedded in the AI, thus leaving workers in the dark about how their pay is determined, how they are evaluated, when they might be disciplined or terminated, and other consequential impacts of these systems. Together with the exercise of rights in Articles 15 and 22 of the GDPR, the knowledge that an employer is using AI systems may be useful during collective bargaining, but for the roughly seventy-seven percent of workers across the EU member states who are not unionized, the notification by itself does little to curb any subsequent harm.81 Again, the underlying principle of this provision is one of consent: once a worker is informed of the use of the AI system, they are free to exit the job; if they stay, they are acquiescing to being subject to and managed by AI. For many low-wage, economically precarious workers, however, the exit option is illusory, and it becomes ever more limited as workplaces increasingly utilize machine-learning systems for labor management.
More promisingly, the Preamble of the AI Act outright bans the production and use of AI that emotionally manipulates people
to engage in unwanted behaviours, or to deceive them by nudging them into decisions in a way that subverts and impairs their autonomy, decision-making, and free choices . . . whereby significant harms, in particular having sufficiently important adverse impacts on . . . financial interests are likely to occur.82
The application of this prohibition to the employment context remains unclear. This prohibition could be interpreted to ban some of the interactive systems that on-demand algorithmic-management companies use to allocate work and determine pay.83 For example, if firms treat their workforce as self-employed (a problem addressed by the PWD84), then perhaps AI systems used to nudge workers to accept work that they would not otherwise accept and to prod them to move to places they would not otherwise move may be affirmatively prohibited.85 But in the context of legally recognized formal employment, such systems produced by the employer would likely be protected by the managerial prerogative.86 In those contexts, the AI would likely be treated as high-risk but not prohibited entirely.87
Indeed, the AI Act considers the use of most AI in the employment context to be unambiguously high-risk, an implicit recognition of the economic dependency on employment for survival and of the doctrinal implications of the managerial prerogative.88 The Act divides firms into “providers” and “deployers.”89 Employers who purchase AI to use on their workforce—the deployers—have limited obligations under the Act. Most of the regulatory onus falls on the providers of AI. Specifically, in recognition of the iterative and changing nature of machine-learning systems, the AI Act mandates that providers of AI that is developed for hiring, performance, management, and monitoring—including software that sets wages, evaluates, and disciplines workers—must develop a risk-management system by August 2026, when the regulation comes into force.90 This system must include testing mandates91 that follow a product through its life cycle, including in its post-market phase when the product is purchased and used by a deployer (the system is thus reliant on compliance by deployers with monitoring and reporting obligations).92 Providers must specifically examine how the system is “likely to affect the health and safety of persons, have a negative impact on fundamental rights or lead to discrimination prohibited under [EU] law.”93
Responsibility for evaluation, recordkeeping, testing, and risk assessment likewise falls primarily on the provider, not on the deployer or on an unbiased, public third party.94 Instead of directly mandating public assessments of these systems at the deployment level, as would be ideal, the Act requires self-regulation by the firms that create the machine-learning systems, who are required to maintain human oversight and monitoring for specific issues—most relevant here, violations of the EU’s Fundamental Rights and the health and safety of workers.95 But the Act provides no guideline for evaluating harms related to the workplace. How is a provider to test for “health and safety” impacts? What are the criteria to evaluate a system that creates low and unpredictable wages in relation to worker health and safety? Does the emotional distress caused by an AI system that invisibly evaluates workers make the system “unsafe”? These are questions that remain unanswered. As with the GDPR, the lack of clear guidelines around harm and fairness calls into question the efficacy of these life-cycle assessments, even if they are carefully and inclusively conducted.96
C. The Platform Work Directive (2024)
While the GDPR and the AI Act offer rights to workers of all stripes, the PWD explicitly emphasizes that the rights it enumerates apply only to platform workers, who are granted more expansive data and data-processing rights than any other workers in the EU.97 “Platform work” is defined narrowly as “a form of employment in which organizations or individuals use an online platform to access other organizations or individuals to solve specific problems, or to provide specific services in exchange for payment.”98 At the time of writing, though the PWD has passed the EU Parliament, it has not been put into effect by member states.99 Thus, the analysis in this Section is speculative; nevertheless, this directive is particularly useful to evaluate because, compared to the GDPR and the AI Act, the PWD provides broader and arguably more-effective rights to a specific subset of workers who are subject to ADSs and AMSs.100 Unlike the two previously discussed bodies of legislation, the PWD was written with platform workers in mind and more expansively addresses the problems they face.101
Specifically, the PWD offers “more specific safeguards concerning the processing of personal data by means of automated systems in the context of platform work” and recognizes that “the consent of persons performing platform work to the processing of their personal data cannot be assumed to be freely given.”102 Unlike both the GDPR and the AI Act, the PWD reaches beyond transparency, consent, and impact assessments to affirmatively prohibit the use of certain processing of personal data relating to the individual’s body, mental state, protected identity, or personal beliefs.103 These are not full-scale prohibitions, however. For instance, the PWD may permit automated processing if the data is depersonalized through banding, a loophole that could affect groups of workers exercising their fundamental rights, including their freedom of association.104 Moreover, while it bans the processing of biometric data, it allows “biometric verification” such as the use of facial recognition technologies to identify workers, even though such systems have a higher false-positive rate for people of color and can lead to unfair termination.105
The PWD may also fail to attend to the structural realities of digital control. Critically, the PWD does not affirmatively prohibit automated decision-making in contexts related to hiring, pay determination, work allocation, discipline, and termination.106 Instead, it extends the rights embedded in Article 35 of the GDPR to the context of platform work by mandating that firms carry out impact assessments before new ADSs are deployed.107 Such firms must “carry out a data-protection impact assessment” to evaluate the impact of ADSs’ processing of personal data on the rights and freedoms of persons performing platform work.108 The firms’ assessment must be carried out every two years and shared with workers and workers’ representatives.109 One problem with this approach, however, is that by allocating the responsibility for this evaluation to the firms themselves (as opposed to mandating a public audit), the PWD, like the AI Act, neglects the enforcement problems that arise with black-box systems. Given the competitive incentives for firms to maintain secrecy around these systems, how does a worker or workers’ representative know that the impact assessment includes all the AMSs and ADSs that the firm deploys?
A second and more significant problem is that like the GDPR, the PWD fails to lay out meaningful standards or criteria for the impact evaluations of the ADSs, or to specify affirmative steps that must be taken if the ADSs are found to be harmful. The presumption embedded in the PWD is that if the assessment finds that the evaluated systems detrimentally impact workers’ fundamental rights or violate the labor laws of a particular member state, the firm will then refrain from deploying the system. But many of the harms experienced by platform workers—including those that arise from algorithmic wage-discrimination practices and automated termination practices—do not necessarily violate any existing fundamental rights or the labor rights enumerated by member states. For example, if an ADS uses personal data to determine a worker’s wages, as long as the wages do not fall below the minimum wage and as long as they do not differentially impact workers based on protected identities, they are not per se unlawful under existing employment laws. Indeed, even though such algorithmic wage discrimination has clearly identified harms to workers—such as increasing income uncertainty110 and workforce division111—an impact assessment by a platform company is not likely to capture these harms or consider them when deploying the systems, in large part because they serve the firm’s profit interests.
The PWD also contains transparency obligations in relation to AMSs and ADSs used by the platform company. On their face, these obligations are stronger than those embodied in the GDPR because they place an affirmative obligation upon the platform companies rather than relying on workers to exercise these rights. Per the directive, platform companies must provide information to workers
in relation to automated monitoring systems and automated systems which are used to take or support decisions that affect persons performing platform work, such as . . . their access to . . . work assignments, their earnings, their safety and health, their working time . . . , their promotion or its equivalent, and their contractual status, including the restriction, suspension or termination of their account.112
This may not only force firms to make their algorithmic logics public, but also make the implications of such systems the subject of public debate and contention. Still, the nature of machine-learning systems puts this outcome in question.113
Though the PWD has yet to be adopted by member states, we can make some predictions about its effects. First, because the PWD extends greater digital rights to “platform workers” than to other workers, the directive may invite firms to engage in definitional arbitrage not only with respect to whether their workers are “employees” but also as to whether they themselves are “platform companies,” thus undermining the potential impact of the law’s assessment and transparency obligations. Second, even assuming proper classification, there is reason to be concerned about the directive’s ability to curb harms caused by ADSs. As the case studies discussed in Part III show, transparency and information-sharing on their own are not immediately useful in the context of a workplace in which digital systems are constantly changing and in which firms rely on these systems to create competitive market advantages.
The most promising parts of the PWD are its outright prohibitions, not only because they affirmatively protect workers from technologies currently causing extensive harms across the EU, but also because they gesture toward the possibility of an alternative approach to ADSs and AMSs in which data laws reach beyond transparency to focus on direct harm avoidance. Indeed, an absolute ban on certain data-processing systems may be appropriate when the outcome of deploying such systems is likely to be fundamentally at odds with fair, equitable, and secure work. This idea is further developed in Part III.
TABLE 2. summary of key data rights afforded to workers under the pwd
II. workplace subordination and the new logics of workplace control
They are using Big Data as a replacement for the Big Boss.
—California-based Uber Driver114
Though welcome, the first wave of EU digital rights discussed above does not adequately address many of the harms specific to new forms and logics of automated labor control. In large part, as I discuss below, this is because the digital rights offered by these legislative initiatives—even the PWD—make a critical category error. They treat workers in the same way that they treat consumers: as liberal subjects whose primary interests are in privacy, consent, and transparency. But people work to live—to purchase necessities like shelter and food—and thus have a unique dependency on their employers. This economic dependency is compounded by the fact that in many legal systems, including in the EU and the United States, workers are not treated as autonomous equals when they are on the job; they are, by law, subordinated to their employer.115 The primary interests of workers, then, may be better understood as wage security, job certainty, and on-the-job dignity. The question then becomes: do data rights laws help workers to achieve these central interests?
As discussed below, in critical ways, data-processing systems may change the entire premise of workplace control, making collective knowledge of the rules embedded in the data-processing systems largely unhelpful to workers. Instead of operating through systems of clear, fixed rules and progressive discipline procedures in which workers are evaluated individually (as has been the norm under a previous generation of scientific management), firms that rely upon automated data-processing systems may control workers by situating them relationally to one another, creating iterative rules based on evaluation of the entire workforce. Evaluation, then, is collective and contextual, and may operate to continually modify worker behavior. Indeed, workers’ knowledge of the logic of the ADSs may even compel a race to the bottom, prompting them to behave in self-exploitative ways. As discussed herein, the legal subordination and dependency of workers, combined with the relational logic of data-processing for workplace control, inhibit the capacity of transparency, assessment, and consent mechanisms to create workplaces with certainty, security, and dignity.
A. Workers as Illiberal Subjects
Workers are, by law and circumstance, necessarily subordinated to their employers. Unlike “natural persons” in the larger polity—who, as consumers or even as citizens, can make basic demands of a firm or of the state without fearing economic or (ideally) political repercussions—workers are not empowered to behave independently of their employer’s interests. This means that, as a practical matter, rights to gain insights into the algorithmic logics of management are difficult for workers to exercise. And even when workers find a way to exercise such rights (as demonstrated by the litigation case studies in Part III), without powerful independent worker representation, such as through a union or NGO, it is nearly impossible for individual workers to make sense of the data released, ensure the information is comprehensive, or bargain over the terms of the AMSs and ADSs. The PWD directly encourages this kind of collective consultation in the narrow case of platform work, but it also presupposes the existence of such independent, representative bodies—which, in many cases, do not exist.116
The fact that employees (or workers functionally treated like employees) are legally subordinated to their employers is not solely, or even primarily, a product of the contractual specifications that govern any particular employment relationship. Rather, it follows from the legal doctrines that constitute employment. In contrast to most civil or commercial contractual relationships, the employment relationship is predicated on the prerogatives of the employer. The employer has—within certain legislatively inscribed or collectively bargained-for legal bounds—the unfettered discretion to control and direct the worker on the job (and sometimes, particularly as it relates to speech, off-the-job activities as well).117 Unless otherwise contracted for, an employer can control when a worker uses the bathroom, when they eat a snack, what they wear, and how they behave.
Empirical analysis has shown that even in the setting of “platform work”—where the companies dispute the classification of their workers as employees, and in most jurisdictions legally treat them as self-employed (an issue that the PWD separately addresses118)—firms have used the doctrine of managerial prerogative to confer a general prerogative of enterprise ownership.119 That is, they have maintained both that their workers are not employees and that despite this, the managerial prerogative allocates them the right to exert labor control.120 Uber, for example, maintains that, as the owner of the enterprise, it can use digital technologies to coordinate labor operations and that it does not need to be considered an employer to do so.121 Workers for Uber, meanwhile, have little control over labor operations beyond when they begin and end their shifts, yet are denied the labor-law protections normally afforded to employees.122
The doctrine of the managerial prerogative is legally and ideologically reinforced in most U.S. and EU jurisdictions by versions of the common-law agency test that determines who is an employee.123 Though the specifics of this test vary by jurisdiction, most jurisdictions recognize that to benefit from employment and labor rights, the hiring entity must exert a high degree of control over “the manner and means” of how the work is conducted.124 Different versions of this test and different judicial approaches do not necessarily reflect a broad consensus of what “control” looks like—especially in digitalized labor control.125 Nevertheless, the underlying assumption is clear: employers have the presumed legal authority to “control” (or in the civil-law context, “subordinate”) the worker and the workplace,126 making the individual exercise of transparency rights difficult and risky.
In light of workers’ relative powerlessness in the workplace, their constant fear of termination, the risk of disciplinary repercussions,127 and the limited impacts of the rights themselves on workplace harms, workers are unlikely to individually exercise their digital rights to request data transparency, to request access to impact assessments, or to challenge the scope and validity of those assessments. In the EU, unlike in the United States, workers labor under a default regime of just-cause protections—meaning they cannot be fired except for “just cause”—and thus cannot legally be fired merely for exercising their data rights.128 But even with such protections, the introduction of automated termination systems and the enshrouding of workplace rules in algorithms make it difficult for workers to ascertain and contest pretextual termination, absent due process.129 Thus, not only are workers’ primary interests not directly represented by the existing web of data rights, but these data rights are also conceptually limited by the legal structures of employment such that they are inadequate vehicles for helping workers to achieve certainty, security, and dignity in the workplace.
B. From Individual to Relational Control
Data access can pour petrol on the fire. It confirms for us what our own intuition says is happening [in terms of how we are controlled]. But let’s not kid ourselves. We understand the logic and then the rule changes.
—James Farrar, United Kingdom-based former Uber driver130
In the collective context, transparency mechanisms may in theory empower workers to exercise their existing rights. For example, if the ADSs were allocating wages that fall below legislated minimum-wage standards, then transparency laws like those embedded in the GDPR and the PWD may be useful in holding the employer to the letter of the law and deterring them from non-compliance. However, in many cases, mere knowledge about algorithmic-management systems will not enable workers to understand or effectively negotiate workplace control, nor will such knowledge necessarily help workers to overcome new harms arising from control enacted through machine-learning systems. These failures are related. Not only are many of the problems posed by digitalized control new and unaccounted for by the existing panoply of work laws, but the systems of control themselves also depart from more familiar forms of scientific management. Rather than a definitive set of rules knowable to the employer and the employee, the iterative use of algorithms and data means that workplace rules for control are ever-shifting—aimed at dynamic behavior modification and instrumentalization.
Under traditional models of scientific management, worker efficiency and productivity are created through cognizable forms of rulemaking and application.131 Rules are generated through a careful analysis of work processes, with the aim of eliminating temporal and material inefficiencies in production and lowering labor overhead.132 Employers convey the rules to workers whose individual jobs include completion of one or more components of the production process.133 Workers are then individually evaluated by human managers for compliance with those rules.134 Workers who comply with rules keep their job; workers who violate rules lose their jobs or are otherwise disciplined.135 Ideally, workers who excel in compliance with workplace rules advance in their jobs and are rewarded with higher wages.136 As sociologist Michael Burawoy long ago observed, these approaches to worker control emphasize rule “compliance and obedience to management in the pursuit of a common interest.”137
Under workplace management that takes place through machine-learning systems, however, these logics and norms are disrupted: the rules are mutable, wages are not necessarily tied to individual rule compliance, and hard work may become technically disentangled from advancement and higher wages.138 Employers still break down processes and create foundational rules for each component of the work process with the goals of increasing production and decreasing labor costs. AMSs collect personal data on individual workers’ on-the-job behavior, and employers may purchase data about workers’ off-the-job and previous job behavior (including, possibly, where they live, how much they have historically been paid, and so on).139 This data—constantly collected—is fed into algorithmic systems that then train computers, iteratively creating new rules of workplace control. These dynamic rules aim to change the behavior of individual or banded workers.140
The iterative customization of management to modify worker behavior, however, qualitatively changes the mode of production, particularly the relationship between worker rule compliance and labor costs. Employers no longer have to decrease labor costs through temporal efficiencies gained by direct rule compliance by workers. For example, algorithmic systems can be used to minimize labor costs through the personalization of worker wages.
Thus, not only does the nature of algorithmic management make it impossible for workers to behave in ways that pave the way for advancement, but, based on machine-learning analysis and decisions, workers may also be differentially treated and paid, from moment to moment and from day to day. For example, while traditional models of scientific management include ascribing a fixed hourly wage to a given job, algorithmic management frequently uses dynamic wages (or “wage manipulators”) that seek to modify worker behavior.141 On one day, a worker may earn higher wages. On the next day, despite doing all the same things they did the day before, that worker may earn less. Evaluations are not necessarily made individually, based on a single worker’s behavior, but contextually, based on the worker’s behavior in relation to the population of other workers. Collectively understanding the logic of the decision-making systems, then, will not necessarily help workers to excel in their jobs, because the system may be designed to learn about and categorize behaviors and treat individuals or groups of workers differently, relative to each other.
Thus, automated data-processing systems may make unpredictability and uncertainty standard features of work. For instance, in contrast to offline management systems, algorithmic management systems will not necessarily reward loyalty and hard work—indeed, under such dynamic systems, it may not be possible to know what constitutes hard work. The relational logic of the systems both complicates the definition of hard work and makes it a moving target. As Uber’s own research suggests, for example, drivers who labor for longer periods of time typically earn less per hour.142 Likewise, leaked corporate documents about Amazon’s warehouse labor management reveal that workers are terminated when automated systems determine that their productivity levels fall to the bottom twenty-five percent.143 This means that workers can be fired not just for violating known workplace rules, but also for performing in ways that position them as perceived outliers in dynamic, digitalized productivity evaluation.144 The workplace rules no longer create a “common interest” between the employer and the worker, as Burawoy observed.145 Instead, the workers’ interest may become disconnected from the employer’s, severing the norms that used to connect workplace obedience and rule compliance with worker security.
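A minimal sketch, using assumed numbers and field names rather than any firm’s actual data or code, can illustrate the relational logic described in this Section: a worker’s standing depends on where they fall in the current distribution of their peers, not on a fixed standard, so identical output can be acceptable one week and grounds for discipline the next.

```python
# Minimal sketch of relational evaluation: a worker's standing depends on
# where they fall in the current distribution of peers, not on a fixed
# standard. All numbers and field names are assumed for exposition.

from typing import Dict, List


def flag_bottom_quartile(productivity: Dict[str, float]) -> List[str]:
    """Return worker IDs whose output falls in the bottom 25% this period."""
    scores = sorted(productivity.values())
    # Cutoff is the score at the 25th-percentile position of the sorted list.
    cutoff = scores[max(0, int(len(scores) * 0.25) - 1)]
    return [w for w, score in productivity.items() if score <= cutoff]


if __name__ == "__main__":
    week_1 = {"A": 110, "B": 95, "C": 120, "D": 105}
    week_2 = {"A": 110, "B": 125, "C": 130, "D": 128}
    # Worker A produces the same output (110) in both weeks, but is flagged
    # only in week 2, because the peer distribution shifted upward.
    print(flag_bottom_quartile(week_1))  # ['B']
    print(flag_bottom_quartile(week_2))  # ['A']
```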
III. the failures and futures of data laws as work laws
[N]o employer has given a full and proper account of the automated personal data processing. . . . This is a tool of resistance rather than [merely] a tool of retrieving information.
—Cansu Safak, Worker Info Exchange Research Lead146
One reason platform work has served as a laboratory for algorithmic management systems is that many firms that use platforms to control their workforces also maintain that those workers are self-employed.147 To maintain this facade, the firms have experimented with different forms of digitally enabled labor control.148 In addition to framing rules as “suggestions,” firms using platforms to manage their workforce might use the opacity and uncertainty of their pay, work allocation, and termination systems to compel workers into behaving in certain (sometimes self-exploitative) ways.149 Firms may use “wage manipulators,” such as surge pricing or bonus incentives, to compel workers to labor at certain times and for longer periods of time.150 They may use “algorithm updates” to alter worker behavior or to change how the firm distributes work and determines pay.151
In this context, platform workers have discovered the importance of having and understanding their data, which, at a minimum, can help them articulate why they should benefit from existing employment and labor law protections. In this Part, I examine the first strategic litigation brought by workers under the GDPR to gain access to their data and to the underlying logic of the data-processing systems that determine their pay and work allocation and that flag them for suspension or termination. As discussed below, despite the successful litigation, access to such information has not had the kind of impact that workers had hoped for. Still, the litigation may be critical to establishing employment status and building on-the-ground resistance among an already-distressed workforce. And, perhaps most importantly, this strategic litigation illuminates the path that future legislation on data rights at work should take. Prospective legislation must not only tackle the barriers to transparency revealed through these cases; it must also proscribe outcomes and algorithmic systems that undermine the basic interests of workers.
A. Strategic Litigation to Mobilize Data-Processing Rights for Workers
In 2016, James Farrar (alongside his coworker Yaseen Aslam) sued Uber, alleging that the company had misclassified them as self-employed workers.152 After five years of litigation, the U.K. Supreme Court agreed.153 But at the tribunal level, Uber argued that Mr. Farrar was not owed work protections because the company allowed him to behave like a small businessperson; it did not even discipline him for declining a large percentage of rides.154 As an example, Uber showed that in his twenty-seventh week on the job, he had worked for 91 hours while refusing 60% of the rides sent to him. Mr. Farrar, flummoxed by the gap between this account and his memory of how hard he had worked, located the “on-boarding document” that Uber had provided to him when he was hired.155 The document indicated that workers were expected to complete 1.4 to 1.5 trips per hour to be considered productive, far fewer than he had completed.156 “This,” Mr. Farrar said, “[m]ade me understand that I needed to control my own data to [be able to prove I was] an employee.”157
Mr. Farrar went on to establish the Worker Info Exchange (WIE), a public-interest nonprofit in the European Union, with the mission of supporting platform workers in “navigating this complex and under regulated space.”158 Using the GDPR, WIE has made “data subject access requests” and “data portability” requests on behalf of individual workers to help them understand terminations or why their accounts have been flagged for fraudulent activity.159 In some instances, though making such requests has been “extremely time consuming and capacity intensive,” the requests have enabled individual workers to get their jobs back.160 However, these requests, on their own, do not address the broader problems and harms of algorithmic management—the use of the automated systems that caused the terminations in the first place. Perhaps more alarmingly, WIE has found that “companies have shown a tendency to deny the data practices they do not wish to disclose.”161
WIE has also pursued strategic litigation challenging specific companies’ responses to these data subject access requests. This litigation, which focused on the algorithmic control practices of the ride-hailing firms Uber and Ola, sought to learn how the companies allocated work, determined pay, assessed performance, and terminated workers—all basic aspects of work that the firms nonetheless kept shrouded. Exercising collective digital rights under the GDPR, WIE, working alongside the App Drivers and Couriers Union (ADCU) in the United Kingdom, represented eleven drivers based in the United Kingdom, the Netherlands, and Portugal who sought access to their data, algorithmic transparency, and protection from automated decision-making. In both cases, the workers won access to the information on appeal. Below, I analyze these cases and discuss the limitations of the GDPR data rights they successfully leveraged.
1. Ola Cabs: Transparency to Understand Termination
In June 2020, on behalf of three drivers who had been terminated by Ola, WIE and ADCU filed collective data requests under Articles 15, 20, and 22 of the GDPR.162 Using language from Ola’s privacy policy, WIE requested the drivers’ “fraud probability score,” which Ola indicated it relied upon; the “earning profile” of each worker; and the logic of work allocation.163 The drivers hoped to gain access to their own trip and transaction data so that they could check their payment calculations over time, and to better understand the automated decision-making relevant to work allocation, performance management, and dismissals.164 The workers also asserted, under Article 22, the right to a human in the loop—that is, the right not to be subject to automated decision-making that “significantly affect[s]” the data subject.165
After WIE and ADCU’s initial victory against Ola for lack of compliance, the company appealed the lower court’s decision.166 Broadly, the appeal concerned (1) whether the automated decision-making triggered legal consequences for drivers or otherwise “significantly affect[ed] them,” which would mean that the ADSs would be subject to the data release; (2) whether Ola could lawfully invoke an exception to avoid complying with the request; and (3) whether the data to be shared under the GDPR was indeed “personal data.”167 The Amsterdam Court of Appeal ruled largely in the workers’ favor, finding that the ADSs that produced the “fraud probability score,” “earning profile,” and journey allocation all fell under Article 22 because they “significantly affect” the workers whose jobs were impacted by them.168 The decision referenced the European Data Protection Board Guidelines, which specify that Article 22 cannot be circumvented by a firm’s “feigning” human intervention.169 “To achieve genuine human intervention,” the court wrote, “the controller must ensure that any oversight of the decision-making process is meaningful and not merely symbolic,” and “[a]s part of its data protection impact assessment, the [data] controller must identify and record the extent of human intervention in the decision-making process and the stage at which it took place.”170 Ola argued that the relevant question under the GDPR is whether automated decision-making takes place “on the basis of” the fraud probability score.171 The court, however, held that the question was whether the score itself is “based exclusively on automated processing,” because the score had significant legal effects on the driver.172 The same was true of the drivers’ “earning profiles” and the allocation of journeys.173
The court also rejected Ola’s claims that the requested information contained trade secrets regarding its business model and the security measures the company had taken, finding that the company had failed to substantiate these claims.174 Regarding the explainability of the automated decision-making, the court wrote, “The information provided must be sufficiently complete for the data subject to understand the reasons for the decision . . . [but] it does not necessarily have to be a complicated explanation of the algorithms used . . . .”175 Ola’s initial response had thus been noncompliant with the GDPR because it was too brief and general. The company was subsequently ordered to communicate “the most important assessment criteria and their role in the automated decisions,” so that drivers could not only understand how decisions were made but also check that the systems had operated correctly as to their own work.176
Despite the success of the workers’ appeals, the data transferred by Ola to WIE has been, in the words of one advocate, “horse shit.”177 This is due not only to the tremendous amount of analysis that must be done to make sense of the data, but also to the relational nature of the data-processing systems described above. The rules and logic of pay and termination have also changed since the workers first filed their claims, three years prior to the appellate decision. And, as in other instances, critical rules and explanations appear not to have been shared or released. For example, Ola explained how it allocated work to drivers as follows:
We use a combination of customer and driver personal data, such as: . . . booking cancellation history, booking acceptance history, distance from user, home location preference, payment method preference, fuel type of the car, lease details of vehicle, car maintenance history, proximity to customer, fraud probability score, [and/or] interaction history with customer care . . . to allocate drivers’ vehicles to requesting customers, and to determine the route and pricing.178
How is each of these factors valued and weighed? How can a worker use this information to improve the likelihood of being allocated good work? What else falls into the “such as” category? Without a public audit of Ola’s systems, the workers have no way of comparing what they were able to obtain through this successful litigation against the systems Ola actually uses, and thus no way to verify their intuitions about how those systems might work.
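A purely hypothetical sketch illustrates the gap between this disclosure and an actionable explanation. The factor names below echo Ola’s list, but the weights, scaling, and functional form are invented assumptions; without them, which is precisely what the disclosure omits, a worker cannot tell which behaviors actually move the score.

```python
# Purely hypothetical: one of countless ways the disclosed factors could be
# combined into an allocation score. The weights and functional form are
# invented; Ola disclosed only the factor names, not how they are weighed.

WEIGHTS = {
    "booking_acceptance_rate": 0.30,    # invented weight
    "booking_cancellation_rate": -0.25,
    "distance_from_user_km": -0.20,
    "fraud_probability_score": -0.15,
    "customer_care_interactions": -0.10,
}

def allocation_score(driver_features):
    """Weighted sum of driver features; by assumption, higher scores get work first."""
    return sum(weight * driver_features.get(name, 0.0) for name, weight in WEIGHTS.items())

example_driver = {
    "booking_acceptance_rate": 0.9,
    "booking_cancellation_rate": 0.05,
    "distance_from_user_km": 1.2,
    "fraud_probability_score": 0.02,
    "customer_care_interactions": 1,
}
print(allocation_score(example_driver))
```

Two firms could disclose identical factor lists while running entirely different systems; a list of inputs, without weights or logic, answers none of the questions above.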
Even with access to the data and the technical ability to analyze it, workers will remain at a fundamental disadvantage because firms that use ADSs and AMSs can quickly change their systems, undermining whatever knowledge workers might gain through transparency rights. Moreover, even if a worker has access to the data collected on them and, in theory, to the logic of the algorithms, translating that information into an understanding of how those algorithms affect their working conditions is not a simple or straightforward matter. Algorithms do not function like offline workplace rules. How does a worker translate the logic of an algorithm from the viewpoint of the firm to the experience of the worker? Is an algorithm that allocates bonuses as wage manipulators, incentivizing a worker to work longer hours, good or bad? Is it the bonuses that augment worker stress, or does the stress arise from the algorithmic allocation of those bonuses—disseminating them in different amounts to different workers at different times? It is nearly impossible for workers to use the algorithmic information provided to them to identify or isolate the precise cause of their workplace harms.
2. Uber: Transparency to Understand Pay and Work Allocation
WIE also represented a group of eight Uber drivers in making another data-subject access request. Under GDPR Article 15, they requested a variety of information about the data and automated decision-making systems relevant to how drivers are allocated work and paid.179 Their requests included access to the logic of Uber’s “batched matching system” (used to allocate work by matching drivers and passengers) and “upfront pricing system” (used to differentially determine base wages for each trip).180
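For readers unfamiliar with the term, “batched matching” generally refers to collecting ride requests over a short window and then assigning drivers all at once according to some firm-chosen score, rather than dispatching the nearest driver to each request as it arrives. The sketch below is a generic, hypothetical illustration of that idea under assumed names and an assumed distance-based score; it is not a description of Uber’s actual system.

```python
# Generic, hypothetical sketch of batched matching. Requests accumulate for a
# short window, then drivers are assigned greedily by a firm-chosen score.
# This is not Uber's system; the score used here (negative distance) is invented.

from itertools import product

def match_batch(drivers, requests, score):
    """Greedily pair each request with its best-scoring available driver."""
    candidates = sorted(product(drivers, requests), key=lambda pair: score(*pair), reverse=True)
    taken_drivers, taken_requests, matches = set(), set(), []
    for driver, request in candidates:
        if driver not in taken_drivers and request not in taken_requests:
            matches.append((driver, request))
            taken_drivers.add(driver)
            taken_requests.add(request)
    return matches

# Invented distances (in km) between drivers and pending trips.
distances = {("driver_1", "trip_A"): 1.0, ("driver_1", "trip_B"): 4.0,
             ("driver_2", "trip_A"): 3.0, ("driver_2", "trip_B"): 2.0}
print(match_batch(["driver_1", "driver_2"], ["trip_A", "trip_B"],
                  lambda d, t: -distances[(d, t)]))
# [('driver_1', 'trip_A'), ('driver_2', 'trip_B')]
```

In practice, the score could weigh predicted fare, acceptance history, or any number of undisclosed factors, which is exactly what the drivers’ request sought to uncover.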
Like Ola, Uber initially shared an insufficient set of data. When challenged in court,181 the company argued that the information requested contained trade secrets, and that providing it “could lead to circumvention of those processes [by drivers] and [also that] competitors could take advantage of it.”182
The Amsterdam Court of Appeal appropriately rejected Uber’s defense. The court found that, taken as a whole, these systems “affect[] [the drivers] to a considerable extent” and that those impacts on workers outweighed the company’s trade-secrets claim; thus, under the GDPR, the company was obligated to explain its systems of pay and work allocation to workers.183 Although the case was decided in April 2023, as of this writing, Uber has yet to provide adequate information to the drivers. Instead, the company has paid a substantial penalty to the workers for failing to comply with the order.184
Uber’s defense in this instance may also help us understand the limitations of transparency. Uber argued, in essence, that if workers knew the rules of the workplace, they could circumvent the management systems.185 On its face, this defense reveals the extent to which the company’s system of control relies not just on opacity but on ADSs that situate workers in relation to one another asymmetrically. Knowledge of the algorithmic logic might advantage one worker over another by allowing that worker to behave in ways that secure more work at higher wages; but because the system works relationally, if all workers had this knowledge and behaved accordingly, the managerial logic would be disrupted.
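A final hypothetical sketch makes this point concrete. Suppose, purely for illustration, that work were allocated by ranking drivers on a single known factor, such as acceptance rate; the names and numbers below are invented. One driver who learns the rule can climb the ranking, but if every driver responds to the same knowledge, the relative ordering, and therefore who gets the work, barely changes.

```python
# Hypothetical: work allocated by relative rank on one known factor.
# Individual knowledge of a relational rule helps; universal knowledge does not.

def rank(acceptance_rates):
    """Order drivers from best to worst by acceptance rate."""
    return sorted(acceptance_rates, key=acceptance_rates.get, reverse=True)

baseline = {"P": 0.60, "Q": 0.70, "R": 0.80}
print(rank(baseline))                                  # ['R', 'Q', 'P']

# Only P learns the rule and responds:
p_alone = dict(baseline, P=0.95)
print(rank(p_alone))                                   # ['P', 'R', 'Q']

# Everyone learns the rule and responds in kind:
everyone = {d: min(1.0, r + 0.20) for d, r in baseline.items()}
print(rank(everyone))                                  # ['R', 'Q', 'P']  (order unchanged)
```

This is the sense in which shared knowledge of the rules would “disrupt” the managerial logic without improving any individual worker’s standing.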
In this Uber case, as in the Ola case, workers were successful in leveraging their data rights because they acted collectively through the protection of both a union and a nonprofit. Not only did this enable them to make the initial data-subject access request, but it also empowered them to challenge the paucity of the companies’ release through litigation. Despite the landmark wins in both cases, workers were unable to change, circumscribe, or otherwise address the harms that emerged from the data collection and automated decision-making. Merely gaining access to the data and, in the case of Ola, to an explanation of the logics of pay and termination, has done little to stop what workers perceive to be arbitrary and abusive terminations and suspensions.186 Nor has it enabled them to overcome algorithmic wage discrimination, which has created unequal, uncertain pay for equal work.187
This is not to say, however, that these cases are unimportant for workers. As WIE points out, their significance lies not so much in the details of what has been released as in what the cases demonstrate: that a high degree of control is exerted over workers through automated systems. Making this kind of control visible—for example, by showing what leads to automated driver terminations and the consequences of that automation—helps establish that drivers’ on-the-job behavior is highly controlled and thus supports the claim that drivers should be eligible for employment protections. So, too, may these cases and their outcomes help build on-the-ground labor movements to contest the ways in which algorithmic management systems have disrupted workplace norms and, in particular, the connection between long, hard work and economic security.
B. Proscriptive Approaches to Digital Labor Control
What can we glean from the limitations of this first wave of data and data-processing laws? This Essay’s close study of the GDPR, the AI Act, and the PWD, alongside its analysis of WIE’s successful strategic litigation, reveals several key takeaways that may be useful to legislators or regulators seeking to expand data rights for workers.
One set of lessons applies directly to how future data laws may be crafted with an eye toward addressing the asymmetrical legal relationship between workers and their hiring entities. Data transparency should be a set of affirmative obligations imposed on hiring entities, not, as under the GDPR, a right extended to workers that they must proactively operationalize themselves. Moreover, the entity using the algorithmic systems (not just the entity that produced them, as with the AI Act) should be required to carry out periodic impact assessments throughout the lifecycle of the systems. Finally, the data-processing systems used to digitally control workers should also be subject to periodic public or third-party audits in order to promote comprehensive compliance. Failure to comply adequately with data obligations should prompt not just state action but also private enforcement, a possibility currently precluded under some data-privacy laws, including the CPRA.
Future legislation should also address the myriad ways in which firms attempted to evade WIE’s data-access and explainability requests. Data releases must be made in ways that are machine-readable for ease of analysis by workers and their representatives. “Personal data” must be affirmatively broadened by statute to include all social data (such as banded or grouped data) that is derived from personal data, even if not clearly traceable to an individual. So, too, must legislation proactively address evasive legal arguments related to third-party safety and trade-secret claims to facilitate expeditious sharing of information. Merely stating, as the GDPR does, that trade-secret defenses should not necessarily inhibit data access requests is insufficient.
The final and most critical lesson derived from this analysis is that data transparency, and even periodic, publicly available, and contestable impact assessments, might not prevent some of the new harms created through algorithmic management systems. Given the nature of machine-learning systems and the threats that they pose to job security, wage certainty, and dignity at work, legislators concerned about automation at work should focus on the systems’ outcomes.
Traditional employment law does more than improve procedure and promote transparency: it provides substantive protections. Indeed, traditional employment law affirmatively safeguards the specific interests of workers in health, safety, security, and dignity by proscribing certain firm behaviors. It does not just require firms to pay workers, but rather affirmatively bans wages that fall below a minimum. And it does not just require firms to tell workers how dangerous a machine is, but instead creates standards for machine use to ensure human safety. Moving forward, as legislators seek to regulate algorithmic management, they can and should build more substantive protections.188 As algorithmic labor management further disrupts the normative connection between work, dignity, and economic security, some practices can and should be affirmatively redlined. For example, ADSs should not be allowed to set wages, determine the rules for termination, or terminate workers. Rather than merely governing the data, legislators should aspire to govern the use and outcome of data and data processes.
Conclusion
Data [transparency] rights can be part of a movement building model. You’re building worker knowledge, and workers make it part of their campaign. . . . [T]his is a continuous process, which unions have to be a part of . . . . It’s part of building worker power.
—James Farrar, Former Uber Driver, Founder of Worker Info Exchange189
As discussed above, the most prominent and far-reaching data laws for workers have originated in the EU. However, as these laws were modeled on and followed laws addressing problems faced by consumers, they tend to make faulty assumptions about the nature of the digital workplace. In placing a high value on transparency and algorithmic explainability, the laws presuppose that if a worker understands the rules embedded in the algorithmic management systems by which they are hired, paid, evaluated, disciplined, and terminated, then the online workplace is no different from the offline workplace. This assumption fails to account for the formal, legal subordination of workers to their employers—a subordination that makes full exercise of these rights difficult. Critically, it also misunderstands the nature of algorithmic labor management. Unlike traditional scientific management systems in which rule transparency creates the possibility of worker compliance, algorithmic labor control makes obfuscation of the rules a necessary part of the labor-management process. That is, algorithmic management works, in part, by evaluating workers dynamically in relation to each other, through a set of constantly changing, iterative rules.
As evidenced by the case studies discussed in this Essay, even knowing the basic logics of such a system does not necessarily help workers with rule compliance, as they are not judged individually but in relation to one another. Thus, while transparency of workplace rule logics and the privacy of workers are certainly important policy outcomes, they are insufficient by themselves for protecting workers. ADSs that result in new workplace practices and harms—like algorithmic wage discrimination and automated termination—should be addressed affirmatively through legislation that emulates more traditional, proscriptive laws of work. To this end, this Essay concludes that data laws focused on the workplace must affirmatively proscribe—not merely elucidate—these forms of worker control.
Professor of Law, University of California, Irvine; Postdoctoral Fellow, Stanford University; Ph.D. 2014, University of California, Berkeley; J.D. 2006, University of California, Berkeley School of Law; B.A. 2003, Stanford University. The author profusely thanks the many scholars and advocates whose thinking, feedback, and conversation informed this Essay. These people include but are not limited to María Luz Rodríguez Fernández, Meredith Whittaker, Benjamin Pyle, James Farrar, Cansu Safak, Anton Ekker, Sameer Ashar, Jack Lerner, Aziza Ahmed, Stacey Dogan, James Brandt, Zohra Ahmed, Salome Viljoen, Elettra Bietti, Amy Kapczynski, Sanjay Jolly, Yochai Benkler, Tyler Sandness, Katherine Neumann, Brishen Rogers, Edward Ongweso, Kathleen Thelen, Hiba Hafiz, Pauline Kim, Sarah Myers West, David Seligman, and Terri Gerstein. The author also thanks the organizers and attendees of the following conferences where this Essay was workshopped: the 2024 Harvard Law and Political Economy Technology Workshop, the 2024 MIT-Boston College American Political Economy Workshop, and the II Seminário Nacional do Movimento Advocacia Trabalhista Independente do Brasil. Additionally, I thank the Omidyar Network—and in particular Thea Anderson, Director for Digital Identity, and Amal Chaddha, Principal—for their generous support in funding the research that informs this Essay. Finally, I am deeply grateful to the editors of the Yale Law Journal, Yang Shao, Paige E. Underwood, Lily Moore-Eissenberg, Beatrice L. Brown, and Deja R. Morehead, for their patience, brilliant feedback, and excellent editing skills.