Volume 134

Interoperable Legal AI for Access to Justice

14 March 2025

abstract. The access-to-justice gap is growing, affecting individuals with both civil and criminal needs in the United States. Though these challenges are multifaceted, procedural barriers in the U.S. legal system can often inhibit access-to-justice efforts. The resulting inequities undermine fairness for those interacting with courts and jeopardize the legitimacy of the broader legal system. Legal technology driven by artificial intelligence (AI) has been heralded for its potential to combat these challenges on three access-to-justice fronts that are often conceptualized in isolation: a consumer (i.e., self-help) front, a legal-service-provider front, and a court front. Progress on each of these fronts is apparent, though not at the pace or scale necessary to make meaningful inroads into closing the justice gap nationwide. The time has come to appreciate that, although progress on all three fronts is necessary for closing the justice gap and maximizing fairness, it is insufficient if there is not also some level of shared commitment and coordination across—and not just within—all fronts. This Essay argues that technological and procedural legal interoperability—that is, widespread consistency in technology design and related processes—should be at the forefront of these efforts, particularly as they relate to artificial intelligence. Further, although the consumer and legal-services fronts remain critically important, courts should be recognized as the necessary drivers in achieving this interoperable legal AI.

Introduction

The access-to-justice gap is growing. The COVID-19 pandemic1 and economic recessions exacerbated the crisis,2 and millions of Americans still lack access to resources to meet their civil legal needs.3 At the same time, the United States’s criminal-justice system continues to struggle with overworked public defenders and underresourced court systems, resulting in massive case backlogs. Though these challenges are multifaceted, procedural barriers in the U.S. legal system often inhibit access-to-justice efforts and deserve special attention. The resulting inequities undermine fairness for those interacting with courts and jeopardize the legitimacy of the courts’ processes and the legal system more broadly.4 This is an avoidable fate.

Legal technology driven by artificial intelligence (AI) has been heralded for its potential to combat these challenges on three access-to-justice fronts that are often conceptualized in isolation.5 First, AI has the potential to revolutionize how consumers identify, navigate, and ultimately solve their legal problems, either by helping them to do so on their own (so-called “self-help” tools) or by connecting them with legal professionals. Second, AI has the potential to empower legal-service providers to serve more consumers and achieve better outcomes. And third, AI has been envisioned as a promising means by which to streamline and improve courts’ legal processes that have historically limited access and hindered fair outcomes.6

It is no secret that progress must be made on all three of these fronts to maximize access and fairness. Indeed, much attention has been paid—and rightfully so—to the potential impact of enhanced legal-AI tools.7 But the impacts of these developments will likely be limited if court processes are not streamlined to account for the increased volume, variety, and technology-driven nature of cases. Similarly, the impact of AI-driven processes for legal-service providers may be limited if consumers cannot meaningfully participate in problem solving through the new media used by their providers and the courts. The inverse is true as well—progress in the courts will be meaningless if lawyers or litigants are unable to access or use AI tools effectively.

To date, legal scholarship has advocated for progress on each of these fronts. While progress is apparent, it is not taking place at the pace or scale necessary to make meaningful inroads into closing the justice gap nationwide.8 Access for access’s sake and efficiency for efficiency’s sake will not necessarily result in improvements to fairness, especially if AI is designed and implemented in ways that intentionally or unintentionally automate bias and magnify inequality.9 The time has come to appreciate that, although progress on all three fronts is necessary for closing the justice gap and maximizing fairness, it will be insufficient if there is not also some level of shared commitment and coordination across—and not just within—all fronts.

This Essay argues that technological and procedural legal interoperability—that is, widespread consistency in technology design and related processes—should be at the forefront of these efforts, particularly as they relate to artificial intelligence. Further, although the consumer and legal-services fronts remain critically important, courts should be recognized as the necessary drivers in achieving this interoperable legal AI. Fortunately, there are models for interoperable legal AI in other countries, making what might otherwise seem like a daunting prospect seem more feasible.

Part I begins by describing the potential of AI to make progress on the consumer, legal-service, and court fronts. This Essay then turns to the isolated progress seen to date in each of these areas and the long-term limitations of this progress absent interoperable legal AI. Part II studies Brazil’s focus on interoperability to establish its importance with regard to both technology and other processes across the legal problem-solving landscape.

Finally, in Part III, this Essay argues that courts must be the drivers of interoperable legal AI, underscoring the potential for interoperable legal AI to align with broader efforts in AI governance that would both support and be supported by courts’ efforts. Courts are traditionally followers as opposed to leaders when it comes to implementing new technology. In addition, local regulation of legal services and local variations in legal rules and processes present challenges that are in some ways distinguishable from those of other industries. It will therefore be important to analyze the prospect of interoperable legal AI within broader discussions of legal-regulatory reform, including my proposal for a national legal regulatory “sandbox”—a reform mechanism that would provide temporary safe harbors for testing innovative services and collecting data in areas of regulatory uncertainty. The proposed sandbox would promote standardization, transparency, and, ultimately, the technological and procedural interoperability necessary for AI to reach its potential as a tool to help close the access-to-justice gap and facilitate fair outcomes.

I. the limits of ai efforts with consumers, service providers, and courts

The legal problem-solving landscape has made commendable efforts to leverage legal technologies to make inroads in closing the access-to-justice gap.10 But the results have been too local, too limited in scope, and not scalable enough to maximize impact. In 2023, for example, the Georgetown Law Center on Ethics and the Legal Profession concluded that closing the justice gap requires substantial investment from the industry.11 The Legal Services Corporation reported in 2022 that, despite recent efforts, “[l]ow-income Americans do not get any or enough legal help for 92% of their substantial civil legal problems.”12 This Part analyzes the isolated progress seen to date on each front and the limitations of long-term progress absent interoperable legal AI.

A. The Consumer Front

In some cases, technology-driven tools are helping people solve their own legal problems—ranging from creating their own wills and trusts,13 to drafting routine legal documents,14 to completing other tasks that do not always require the help of a professional.15 Nonprofessional assistance has always been in demand,16 and AI has stepped up to help meet it. A service called DoNotPay, run by a then-undergraduate student at Stanford, made headlines in 2016 when it helped overturn 160,000 parking tickets.17 HelloPrenup, a service designed to help couples with prenuptial agreements, secured $150,000 in investment on the popular television show Shark Tank.18 Rasa, a technology-driven app-based service, helps people in Utah assess their eligibility for expungement of their criminal records and, if eligible, navigate the process with the help of AI-enabled software.19 And, of course, LegalZoom has become almost synonymous with legal self-help, assisting “over two million individuals and small businesses by helping consumers prepare downloadable legal documents such as wills, prenuptial agreements, copyrights, real estate leases, and articles of incorporation.”20 In a landscape of regulatory uncertainty, some of these services have resulted in mixed receptions and results. For example, DoNotPay, which has evolved from helping consumers challenge parking tickets to assisting self-represented litigants in small claims court, has been on the receiving end of both awards for access to justice21 and lawsuits.22 But AI can also help people determine when their case warrants professional assistance and can connect them with appropriate professionals when needed.23 In this sense, we are seeing broader movement toward a democratization of legal information.24

This progress, of course, is a challenging endeavor when jurisdictions vary not only in their laws, but also in their rote procedural requirements, such as the design of their forms, processes for filing, and rules governing the use of AI tools. The Filing Fairness Project, an initiative of the Legal Design Lab at Stanford Law School, advocates for modernizing filing procedures in the civil justice system across multiple state court systems, recognizing that “indecipherable court forms and burdensome filing processes discourage participation and prevent many from asserting their rights.”25 The success of these efforts depends on a broad national commitment to promoting interoperability, which will require interdisciplinary and cross-industry collaboration that is currently inhibited by regulatory uncertainty. Indeed, innovation in this space is stifled by uncertainty as to whether certain tools and services constitute the unauthorized practice of law, which is defined and regulated differently across U.S. jurisdictions.26 In addition, partnerships between lawyers and technologists to develop and provide such tools are often hindered by the nearly universal prohibition in U.S. jurisdictions of nonlawyers holding any ownership interest in a partnership with licensed attorneys.27 Some jurisdictions are exploring regulatory reforms to balance self-help innovation with consumer protection, but not at the speed, scope, or scale necessary.28 As a result, designers of AI self-help tools are left to navigate widely varying terrain concerning both substance and process when trying to develop and deploy these tools on even a small scale.

B. The Legal-Services Front

For those cases that require the help of legal professionals, technology has also made noticeable and commendable strides.29 It is now widely recognized that AI has the ability to increase the efficiency of legal tasks30 ranging from intake,31 to eDiscovery,32 to legal research,33 to developing case strategy,34 and even to assisting with drafting legal documents, though not without high-profile misuses.35

But much of this development is happening in-house at the largest corporate law firms.36 And the most impactful generative-AI developments, like Harvey AI—essentially a more dependable and tailored ChatGPT for lawyers—are designed for and marketed to large firms.37 The “two worlds” of legal-technology development were recently observed in two vastly different legal-technology conferences: a “glitzy celebration of big law tech” at Legalweek, and the very modest and understated Legal Services Corporation’s Innovations in Technology Conference, “devoted to tech for access to justice.”38 Bob Ambrogi described the conferences as illustrative of the “funding gap between those who are developing legal technology to better meet the legal needs of low-income Americans and those who are developing legal tech to serve large law firms and corporate legal departments.”39

While it is possible that such services will trickle down to benefit those outside large law firms, services designed for one setting do not always translate well to others. Large firms may continue to thrive in the “golden age of AI,”40 but other providers will likely continue to struggle with resource, resilience, and relationship barriers that are exacerbated by regulatory uncertainty.41 With the means for in-house partnerships limited by ownership restrictions, and with cross-jurisdictional third-party development less robust and effective than that for large firms, most legal-service providers will face an uphill battle to maximize technology’s effectiveness in this fast-paced and complex ecosystem.

C. The Court Front

Finally, the least discussed but perhaps most important players in this landscape are the courts. The most visible technological innovation in U.S. courts in recent years has been the digitization of court forms, which has facilitated electronic filing and other electronic case management.42 For self-represented litigants, many courts have also made efforts to digitize court documents, post free legal forms online for litigants, and (in fewer jurisdictions) even provide computer kiosks to help people navigate their interactions with the court.43 In civil litigation, courts have also been involved in overseeing eDiscovery practices by litigants, including by assessing whether proper search terms and coding are being used by both sides throughout the process.44

Of course, many jurisdictions appreciate that a truly efficient ecosystem is one in which technology can help prevent many cases from needing to reach the courts in the first place.45 Indeed, alternative dispute resolution—the process of settling disputes without litigation—has blossomed into online dispute resolution (ODR) through the use of algorithms to overcome the cost and limited availability of human mediators.46 When courts nevertheless become involved, some have also embraced ODR as an option for a wide range of processes at this stage,47 sometimes turning to ODR systems developed by the private sector.48

All of these efforts have resulted in massive amounts of data, and some coordination across certain state courts is emerging in ways that are encouraging for eventual broader coordination on more complex interoperable legal AI. For example, the National Open Court Data Standards (NODS) initiative has been developed by the Conference of State Court Administrators and the National Center for State Courts (NCSC) in the form of “business and technical court data standards to support the creation, sharing and integration of court data by ensuring a clear understanding of what court data represent and how court data can be shared in a user-friendly format.”49 The primary goal of the initiative is to ease responses to data requests and improve the accuracy and utility of those data.50 The initiative currently serves only to collect and cleanse data, not analyze or interpret it, and “includes only data that courts collect for internal business purposes and that are potentially useful for non-court data requesters.”51

Calls have increased for better and more open data collection, management, and standards, particularly among state courts.52 While these data-focused issues are important, recent advancements in legal AI present additional challenges and opportunities and will require broader considerations and coordination. As AI on a broader scale continues to make inroads into other parts of life, courts still lag behind, and the public is likely to expect more from the courts in the years to come.53 One major challenge for courts is spearheading AI efforts on a local level. There are a number of national organizations for courts, judges, and court administrators,54 but as Cary Coglianese and Lavi M. Ben Dor have recognized, “there currently exists no centralized repository of applications of artificial intelligence by courts and administrative agencies,” and “[g]iven the federalist structure of the United States, the development and implementation of AI technology in the public sector is also not determined by any central institution.”55 As such, there is no leadership or centralized national coordination among courts in the United States on the implementation of court-oriented technologies, which can be a major impediment to the adoption of novel technologies.56 Furthermore, jurisdictional variation is staggering: “According to the National Center for State Courts, approximately 15,000 to 17,000 different state and municipal courts exist in the United States,”57 and “[a]ny one of these numerous judicial or administrative entities could in principle have its own policy with respect to electronic filing, digitization of documents, or the use of algorithms to support decision-making.”58

Interestingly, perhaps the most uniform use of technology across jurisdictions is also the most controversial: algorithm-driven criminal-risk assessment.59 These assessments “[use] risk factors to estimate the likelihood (i.e., probability) of an outcome occurring in a population,”60 such as committing a crime, failing to appear in court, violating parole, or engaging in substance abuse.61 As recently as 2021, some form of risk-assessment formula or aid in sentencing had been adopted in all but four states,62 with many states using either the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) or the Level of Service Inventory-Revised (LSI-R) as their algorithmic tools.63 Critics of these tools have been vocal in the mainstream media,64 and this Essay does not argue that such efforts should be endorsed in moves toward interoperability.

D. The Limits of Progress Absent Interoperable Legal AI

The limitations on each of these fronts are exacerbated by variations in local rules, regulations, and approaches to legal AI in a world where technology neither waits nor recognizes borders. To the extent that progress on the self-help and legal-services fronts continues, their limitations are likely to be further magnified if courts are not prepared for an increase in cases resulting from the rise of legal AI. Kristen Sonday has observed that “[t]he impact of . . . pro se tools are profound because technology allows them to be scalable and replicable, serving more individuals than ever before.”65 Quinten Steenhuis, Clinical Fellow at Suffolk University Law School, has further noted that, although “[t]echnology [has] taken hundreds of hours of work . . . now we can reach thousands of people who otherwise couldn’t access the court.”66 But courts already struggle to keep up with caseloads, and they are likely to struggle even more with any kind of increase, absent changes of their own.67

Widespread court responsiveness and preparation will also be essential to solidifying the role of courts in broader efforts to ensure that legal AI promotes rather than inhibits access to justice. As Colleen F. Shanahan and Anna E. Carpenter have recognized, improving fairness and equality will require more than merely simplifying court procedures.68 In a previous work advocating for a “national legal regulatory sandbox” to safely test innovations for justice-gap impact and consumer protection, I identified challenges faced by local technology and regulatory reform efforts in light of economic and expertise constraints, as well as empirical challenges, that are more appropriately and effectively addressed at the national level.69 The risk of not overcoming these challenges is the further entrenchment of a two-tiered, wealth-based system of legal services, which could manifest itself in several ways. For example, if low-income individuals are relegated to technology-driven tools and services even when human-driven assistance would be more appropriate, those tools might be better than nothing, but they would still fall short of professional human services.70 But the opposite could also be true: legal technology could become incredibly powerful and effective, but not evenly distributed.71 Whereas large firms serving wealthy clients and corporations will have the means to integrate these technologies into their practice, small firms and solo practitioners may not have the resources, resilience, and relationships to do so.72 Moreover, some fear the AI-driven access-to-justice narrative is overhyped and will not significantly alter the status quo of today’s two-tiered system, where not everyone can access legal services.73 These two-tiered systems of inequality are not mutually exclusive.

Without large-scale coordinated efforts across all three fronts to level the playing field and facilitate necessary interdisciplinary and cross-industry collaborations, legal AI risks consolidating power, automating bias, and magnifying inequality. Overt bias has manifested in GPT-driven bots making racist statements because they rely on internet-based language, including toxic discourse “scraped” from websites like Reddit, to develop their responses.74 But bias can also manifest itself more subtly. As Daniel N. Kluttz and Deirdre K. Mulligan have observed, “[P]redictive algorithmic systems embed many subjective judgments on the part of system designers—for example, judgments about training data, how to clean the data, how to weight different features, which algorithms to use, what information to emphasize or deemphasize, etc.”75 Without careful consideration during design, racist outputs can result.76 And bias is an especially serious concern when AI is implemented within the government.77

National efforts to increase access to justice, minimize the risk of bias, and ensure fair and accessible AI-driven legal tools and services will be far more likely to succeed if there is a foundation of more consistent—that is, interoperable—technology and processes across the legal problem-solving landscape. The remainder of this Essay argues that such interoperable legal AI should be at the forefront of priorities in this space in the coming years, and that such a priority is both worthwhile and practical.

II. interoperability as a key to maximizing ai’s access-to-justice potential

A. The Pillars of Interoperability in the Legal-AI Landscape

Interoperability is far from a new concept. It has been defined “in the broadest sense” as “the ability of people, organizations, and systems to interact and interconnect so as to efficiently and effectively exchange and use information.”78 Building on existing efforts more narrowly focused on modernizing filing procedures and standardizing the dissemination of certain court data in certain court systems, interoperable legal AI would have broader aims involving more stakeholders. In order for the U.S. court system to thrive as an “interoperable ecosystem” that facilitates AI development and widespread access to legal information, it must build on five key pillars that have been widely associated with interoperability across different industries and settings, including government. Each of these pillars—technical interoperability, organizational interoperability, legal and public-policy interoperability, semantic interoperability, and socially informed interoperability—is discussed below.79

Technical Interoperability. At the heart of interoperability is “technical interoperability,” or “[t]he ability to operate software and exchange information in a heterogeneous network.”80 This can be achieved in a number of ways, such as by collaborating on product design to ensure compatibility or by otherwise setting technical standards across the ecosystem.81 In courts specifically, the widespread and consistent use of open-source software could increase transparency into the judicial system and facilitate cross-sector collaboration.82 As legal technology continues to advance, technical interoperability will need to expand beyond focusing on the underlying data to also include the more technical aspects of emerging AI.
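
To make this pillar concrete, a minimal and purely hypothetical sketch may help: if participating courts agreed on even a small shared format for electronic filings, a self-help tool or legal-aid provider could build against one structure rather than thousands of local variants. Every field name and structure below is invented for illustration and does not reflect NODS or any other existing standard.

# A minimal, hypothetical sketch of a shared electronic-filing format.
# The structure and field names are invented for illustration only.
from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class CourtFiling:
    jurisdiction: str              # e.g., "UT"
    court_id: str                  # identifier assigned under the shared standard
    case_type: str                 # drawn from an agreed controlled vocabulary
    filing_date: date
    parties: list = field(default_factory=list)
    documents: list = field(default_factory=list)

    def to_json(self) -> str:
        """Serialize to a common interchange format any participating court could ingest."""
        record = asdict(self)
        record["filing_date"] = self.filing_date.isoformat()
        return json.dumps(record)

# The same tool could prepare a filing for any participating jurisdiction
# without reformatting its output.
filing = CourtFiling(
    jurisdiction="UT",
    court_id="third-district",
    case_type="debt_collection",
    filing_date=date(2025, 3, 14),
    parties=["Acme Credit LLC", "J. Doe"],
    documents=["complaint.pdf"],
)
print(filing.to_json())

However simplified, the sketch captures the basic payoff of technical standards: compatibility is designed in once, rather than negotiated separately with each court.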

Organizational Interoperability. For interoperability to flourish, there must be education, buy-in, and an alignment of goals across the ecosystem.83 A function of courts is information processing, from facilitating the initiation of a case to overseeing proper procedure to delivering an outcome.84 Each of these aspects could benefit from the streamlining that interoperability can facilitate. But the benefits of interoperability can also further broader legal-system goals concerning access to justice, including lawyers’ ethical obligations to combat the justice gap.85 Under the Preamble of the American Bar Association’s Model Rules of Professional Conduct, “[A]ll lawyers should devote professional time and resources and use civic influence to ensure equal access to our system of justice for all those who because of economic or social barriers cannot afford or secure adequate legal counsel.”86 Interoperability would also help lawyers meet their obligations under several specific rules invoking access to justice, including the duty to provide pro bono services87 and the requirement that fees be reasonable.88 If interoperability can maximize the widespread development and effectiveness of AI-driven tools for individuals and legal-service providers, it will greatly aid efforts to increase the affordability of legal assistance.

Legal and Public-Policy Interoperability. This pillar recognizes that interoperability efforts implicate laws and public policy and therefore sometimes require legal and regulatory changes.89 These issues have “arise[n] in the contexts of regulated industries . . . or in government enterprises, such as law enforcement, counter-terrorism, and intelligence.”90 The courts represent a government enterprise that is similarly large, complex, hierarchical, and geographically dispersed. Exploration of interoperability principles would be well situated within ongoing discussions surrounding legal-services regulatory reform,91 including the possibility of a national legal regulatory sandbox that would centralize expertise and other resources, reducing the burden on individual jurisdictions.92

Semantic Interoperability. Another fundamental aspect of interoperability is that all participants “speak the same language.”93 In other words, “the semantics and syntax of communication must be formalized in such a way that users know the appropriate inputs and the computing system recognizes meaning with few errors.”94 For courts, such interoperability would increase the volume of data available to efforts that require large datasets.95 The key to unlocking this potential is “data integration,” which “reconciles data from many data sources with different formats and semantics into meaningful records.”96 Obviously, this is challenging in a parallel federal-state court structure that spans fifty states and the federal court system, not to mention variations at the local level within each jurisdiction.

At a foundational level, an AI-friendly court system requires access to information about both the law and one’s individual case.97 Despite some progress, courts could do much more to improve access to such information. For example, AI on the self-help and legal-services fronts could more effectively make use of case law if cases were more uniformly “machine-processable,” which would require ensuring consistency in structure and certain terminology before publication.98
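
A brief, hypothetical sketch can illustrate what such “data integration” involves; the two source formats and every field name below are invented rather than drawn from any actual court. Once differently structured records are reconciled into one consistent, machine-processable form, downstream AI tools can analyze cases from multiple jurisdictions in the same way.

# A hypothetical sketch of data integration: reconciling case records published
# in two invented, inconsistent formats into one machine-processable structure.
COMMON_FIELDS = ("case_id", "jurisdiction", "disposition", "decided")

def from_state_a(record: dict) -> dict:
    """State A (invented) publishes a nested record with its own terminology."""
    return {
        "case_id": record["docket"]["number"],
        "jurisdiction": "State A",
        "disposition": record["result"].lower(),    # "Dismissed" -> "dismissed"
        "decided": record["docket"]["closed_on"],   # already in ISO-8601 form
    }

def from_state_b(record: dict) -> dict:
    """State B (invented) publishes a flat record with different labels and date order."""
    month, day, year = record["DecisionDate"].split("/")
    return {
        "case_id": record["CaseNo"],
        "jurisdiction": "State B",
        "disposition": record["Outcome"].lower(),
        "decided": f"{year}-{month}-{day}",         # normalize to ISO-8601 form
    }

# Once normalized, records from either source can feed the same analysis.
cases = [
    from_state_a({"docket": {"number": "A-101", "closed_on": "2024-06-02"},
                  "result": "Dismissed"}),
    from_state_b({"CaseNo": "24-CV-77", "Outcome": "Default Judgment",
                  "DecisionDate": "06/15/2024"}),
]
for case in cases:
    print({name: case[name] for name in COMMON_FIELDS})

The mapping functions are trivial here, but in practice they encode exactly the judgment calls that shared structure and terminology would make unnecessary.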

In addition, more uniform and centralized data could facilitate AI-driven insights into the fairness of the legal system, which would help ensure that access to the courts actually leads to justice.99 There is a recognized need for such information. For example, the interdisciplinary collaboration, Systematic Content Analysis of Litigation EventS Open Knowledge Network, was recently awarded a National Science Foundation grant to build a platform “to address the dearth of accessible information about who is prosecuted and convicted and what kinds of ultimate outcomes they experience,” overcoming the existing “lack of nationally-accessible and linked data available across the United States.”100 Such data, if effectively leveraged in AI-driven analysis, could be used to, among other things, detect bias in judicial decisions that would be difficult to identify without the assistance of AI.101 Grant funds for such efforts, however, are unavailable in many circumstances, making the prospect of AI uniformity daunting,102 especially if jurisdictions are charged with spearheading such efforts on their own. But efforts to make court processes simpler as part of “semantic interoperability” are not antithetical to the way court systems work; indeed, “complexity reduction” is at the heart of court processes.103 Ensuring that courts are “speaking the same language” when it comes to AI integration would help them manage data and glean insights into how to improve court processes and produce just outcomes.

Socially Informed Interoperability. A final pillar recognizes that “[d]ifferences in cultural, religious, and intellectual perspectives and values, and political, social, economic, and strategic goals may shape how governments or communities approach the goal of achieving interoperability,” and that “[t]hese factors will have an influence on decisions about each facet of the interoperability ecosystem and whether or how a society or government will consider the broader interoperability ecosystem.”104 In the context of courts, these considerations will range from accounting for bias,105 to the prospect of automated decision-making in criminal risk assessments106 or other areas, to the ability of innovators to design access-to-justice-oriented tools that are compatible in key ways with courts across the country.

B. A Comparative Case Study: Brazil’s Interoperable-Legal-AI Efforts

National legal AI interoperability in the United States would not have to operate on a blank slate. Current discourse on the narrower role of data in the legal services landscape could expand to encompass interoperable AI by looking beyond our borders. For example, leaders could learn from efforts undertaken in Brazil, led by the Brazilian National Council of Justice, which oversees the largest judicial system in the world.107 In 2020, the Brazilian judicial system had a backlog of seventy-eight million lawsuits and what a report to the National Council of Justice called “substantial challenges in case flow management and a lack of resources to meet this demand,” which required “[d]rastic solutions.”108 A first wave of efforts to leverage AI in the Brazilian courts resulted in “a seemingly uncoordinated algorithmic universe in the judicial system,”109 leaving the country lacking “a clear policy direction for the use of AI in the judicial branch and clear mandated policy principles to ensure that AI is used ethically and safely.”110 Researchers observed that “[c]ourts [were] not communicating with the [National Council of Justice] or other courts regarding the development of their own tools,”111 despite AI having been used by courts for everything “from classifying lawsuits, to preventing servers from completing repetitive tasks, to even providing recommendations for a court ruling.”112

In response to these challenges, an academic group from Columbia University partnered with a Brazilian nonprofit research institute to “design a collaborative governance structure to strategically integrate all AI initiatives in the Brazilian judiciary.”113 The project’s three objectives were (1) to assess the different AI tools already developed in the judiciary to create a model for integration and standardization, (2) to design a collaborative governance structure, and (3) to create a proposal for aligning the management model with international best practices.114 The project’s report—the “Brazil Report”—also called for implementing and supporting open-source software in the courts, facilitating opportunities for AI court experts to communicate, creating incentives for courts to join the interoperable system, and partnering with universities and the private sector in the development of tools.115

To be sure, there are differences between the U.S. and Brazilian court systems, and regulatory approaches that work for one country in light of legal technology will not necessarily work for another.116 Even so, Brazil’s efforts can serve as a helpful reference point in exploring opportunities for interoperable legal AI, as opposed to just data, in the U.S. court system,117 an analysis that has not yet been undertaken in the legal literature on AI and access to justice. Significantly, the Brazil Report outlined a process for implementing an official uniform electronic system that converts, digitizes, and authenticates documents across the court system.118 The Electronic Judicial Process (PJe) was developed through a partnership between the country’s National Council of Justice and various courts.119 The Brazil Report notes that even courts that preferred their legacy electronic systems agreed to transition to the official uniform system in light of the benefits.120 By 2023, nearly all criminal, civil, and administrative judicial cases in Brazil were managed digitally, with only about 1.1% still paper-based, thanks to the deployment of PJe,121 ultimately allowing courts to leverage AI systems using the digitized data.122

This ecosystem stands in stark contrast to the court system’s fragmented past. A more recent report by Brazil’s National Council of Justice observed that “[p]rior to the establishment of the PJe as the national standard, individual courts developed their own procedural systems,” which “evolved into a complex landscape of derivative systems with local variations.”123 It further noted that “[t]hese inconsistencies led to a situation where the PJe implemented in different courts diverged from the national version, hindering communication and data exchange between them.”124

The Brazil Report offers a useful illustration of how many of the pillars of interoperability translate to the court system and the rise of AI. In modeling its National Interoperability Model on the European Union’s Interoperability Framework, Brazil focuses on “technical interoperability, syntax (formatting and processing data), and semantics (network architecture).”125 From a technical standpoint, the Brazil plan envisioned embracing a common “factory for AI models” that allows “courts that do not have in-house technology teams to scale algorithms for their operations,” thereby facilitating “an open platform for AI development.”126 From an organizational standpoint, the Brazil Report also called for a national “Laboratory for Innovation in the Electronic Judicial Process,” which would assemble national datasets to train tools, centralize AI expertise, and facilitate the sharing of information, including AI models and algorithms.127 It also recognized the need for a centralized organization “to guide and manage the integration,” including by “creating a roadmap to integrate the AI tools, obtaining commitment from multiple organizations to integrate the AI tools, regular monitoring and evaluation of the integration, providing technical support for the integration, and frequent communication with the organizations.”128

Brazil’s efforts demonstrate how interoperability best practices can apply to expansive court systems to facilitate AI and related digitization efforts, establish necessary governance structures, and ultimately improve court processes, transparency, and outcomes. The interoperability envisioned in the Brazil Report is also reflected in broader National Council of Justice initiatives like its “Justice 4.0 Program,” which “serves as a catalyst for digital transformation within the Brazilian Judiciary” and “aims to guarantee more agile and effective services, ultimately simplifying access to justice for all.”129 The program includes, for example, the Digital Platform of the Judiciary, “a public policy that unifies the management of electronic judicial proceedings across all courts in Brazil, ensuring compatibility between different procedural systems,” as well as the Judiciary’s Single Service Portal, which strives to “allow[] users to access services from any court nationwide within a seamless environment.”130

But Brazil is far from the only international source that could help guide interoperability in the courts. The United States could also look to the European Interoperability Framework, on which Brazil’s model was based, which outlines guidance for interoperable digital public services for European Union member countries.131 The Framework’s Implementation Strategy reflects the complexity of achieving interoperability across multiple jurisdictions, including by describing the importance of identifying the processes by which information crosses borders, as well as developing guidelines for how to better align and simplify those processes.132 The Framework also underscores the need to “[e]ngage stakeholders and raise awareness on interoperability” and “[d]esign and perform communication campaigns promoting the importance of interoperability and benefits from applying the [Framework],”133 an important part of securing the buy-in that Brazil achieved despite initial hesitance from some courts.134 The United States could expect similar hesitance, as the NCSC has noted with its voluntary NODS initiative that U.S. state courts “may decide not to comply for many different reasons,” including that compliance “would disrupt or replace an existing build data process.”135 The European Interoperability Framework further emphasizes that better coordination through interoperability can help “guide the design and development of public services based on users’ needs,”136 which could be especially beneficial for courts when it comes to assessing and meeting the access-to-justice needs of members of the public.

III. mutual benefits for interoperable legal ai and broader ai-governance efforts

Interoperable legal AI would support and be supported by broader emerging efforts in AI governance. For example, courts can look to President Biden’s 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which identifies principles for the executive branch to follow in its implementation of AI.137 The Order underscores the importance of “robust, reliable, repeatable, and standardized evaluations of AI systems,” as well as ensuring that AI policies are consistent with the administration’s “dedication to advancing equity and civil rights.”138 National interoperable legal AI would also help further the Order’s vision for the United States to “lead the way to global societal, economic, and technological progress, as [it] has in previous eras of disruptive innovation and change,” which requires “pioneering [AI] systems and safeguards needed to deploy technology responsibly—and building and promoting those safeguards with the rest of the world.”139 Such international dialogues have the power to “unlock AI’s potential for good, and promote common approaches to shared challenges.”140 National coordination of interoperable legal AI would be much better suited to facilitate such a dialogue than the current status quo of wide jurisdictional variation and siloing. With regard to AI in the criminal context specifically, a more national approach to legal AI would also help further the Order’s commitment to “[e]nsur[ing] fairness throughout the criminal justice system by developing best practices on the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis.”141

In addition, interoperability would ensure that courts are able to follow emerging ethical AI principles. As outlined in the Brazil Report, such principles include respect for fundamental rights, equal treatment, data security, transparency, and AI under user control,142 also sometimes referred to as “human in the loop,” where “an individual . . . is involved in a single, particular decision made in conjunction with an algorithm”143 and “has the ability to intervene” when needed.144

Moreover, interoperable legal AI would complement a larger shift toward national coordination of legal technology to encourage more standardization of relevant rules, regulations, and design principles. In particular, the prospect of interoperable legal AI highlights the potential for national coordination to overcome the challenges faced by local efforts in light of economic and expertise constraints, as well as empirical challenges stemming from a lack of helpful data at the local level.145

From an economic perspective, it is no secret that courts have faced budgetary challenges for decades,146 and it is not surprising that investments in technology are often nonstarters.147 But investing in new technology would be much less daunting if a jurisdiction could adopt a preexisting national AI framework, as opposed to starting from scratch. Similarly, as issues arise in the design, implementation, and execution of interoperability, centralization of interdisciplinary expertise to guide and respond to inevitable challenges would be a tremendous asset. If jurisdictions continue to “go it alone,” both the volume and the variety of issues will be much higher, stretching experts thin. Moreover, evaluation of data is critical to AI development. With interoperability, more courts could do the same thing with the same technology, improving both the quantity and quality of data collection and evaluation. And when guidance is developed, it will be more widely applicable.

Conclusion

This Essay calls for interoperability to play a more prominent role in efforts to leverage AI to help close the justice gap. It further argues that the courts must be the drivers of such efforts. While it is beyond the scope of this Essay to present a comprehensive roadmap for such an undertaking in the United States, it is worth noting that interoperability can start small. In addition to the NODS initiative, which is more narrowly focused on data standards, some jurisdictions implementing or exploring regulatory sandboxes for technology-driven legal services have contemplated using compatible data-collection methods to facilitate data sharing.148 Similarly, partnerships between early innovators in court AI could serve as a model for jurisdictions that might then be more inclined to join forces. Eventually, a national entity could more realistically standardize such efforts on a larger scale, similar to the centralization envisioned in Brazil’s plan. Of course, other issues will warrant attention along the way, including the intersection of interoperability and intellectual property. Above all, this Essay aims to underscore the urgency of elevating AI interoperability within discussions on court data, legal technology, regulatory reform, and access to justice, with the goal of developing a more integrated and unified approach to building a fairer legal system.

Assistant Professor and Clute-Holleran Scholar in Corporate Law, Gonzaga University School of Law. The author thanks Marlee Carpenter for her valuable research assistance.

[1]. The Justice Gap: The Unmet Civil Legal Needs of Low-Income Americans, Legal Servs. Corp. 11 (Apr. 2022) [hereinafter Justice Gap], https://lsc-live.app.box.com/s/xl2v2uraiotbbzrhuwtjlgi0emp3myz1 [https://perma.cc/PF2U-FRCM] (“[D]ata suggest that income disparities in the justice gap between low- and higher-income Americans are exacerbated for pandemic-related civil legal problems.”).

[2]. See, e.g., Raymond H. Brescia, Walter McCarthy, Ashley McDonald, Kellan Potts & Cassandra Rivais, Embracing Disruption: How Technological Change in the Delivery of Legal Services Can Improve Access to Justice, 78 Alb. L. Rev. 553, 588 (2015) (describing how “[t]he ‘Great Recession’ of 2008 increased the need for legal services for low- and moderate-income individuals”).

[3]. See Justice Gap, supra note 1, at 7 (“Low-income Americans do not get any or enough legal help for 92% of their substantial civil legal problems.”).

[4]. Fairness does not only mean fair outcomes in individual cases; fairness must also be a visible standard in society. A.D. (Dory) Reiling, Courts and Artificial Intelligence, 11 Int’l J. for Ct. Admin. 1, 2 (2020) (“[While a]dministering justice means delivering justice in individual cases, [] the judiciary also has a shadow function in presenting standards to society more broadly.”).

[5]. See, e.g., Brescia et al., supra note 2, at 553-54 (describing how “[t]echnology has supercharged the ability of lawyers,” widening “access to justice in communities desperate for legal assistance”); Kristen Sonday, Tech-Enabled A2J: From Text to Machine Learning, How Legal Aid Is Leveraging Technology to Increase Access to Justice, Thomson Reuters (Feb. 4, 2020), https://www.thomsonreuters.com/en-us/posts/legal/tech-enabled-a2j-legal-aid [https://perma.cc/4DAE-XXA6] (arguing that “there is no doubt that [legal technology] tools, when applied correctly, will make meaningful strides in the way clients actually access justice”).

[6]. This Essay’s discussion of “courts” can also be more broadly applied to judicial and adjudicative functions across state and federal governments, including at agencies.

[7]. See, e.g., supra note 5; Agnieszka McPeak, Disruptive Technology and the Ethical Lawyer, 50 U. Tol. L. Rev. 457, 466 (2019) (explaining how legal tech can “streamline legal-related tasks,” yield more accurate results, decrease costs, and increase overall efficiency); Sherley E. Cruz, Coding for Cultural Competency: Expanding Access to Justice with Technology, 86 Tenn. L. Rev. 347, 364 (2019) (explaining how chatbots “connect individuals to legal service providers after the program helps the individual identify their legal issue”).

[8]. See, e.g., Clark D. Asay, Artificial Stupidity, 61 Wm. & Mary L. Rev. 1187, 1193 (2020) (“Our computerized world is thus plagued with an artificial stupidity confined to carrying out particular, narrow tasks, and not often very well.”); Kristin B. Sandvik, Is Legal Technology a New ‘Moment’ in the Law and Development Trajectory?, PRIO Blogs (Feb. 14, 2020), https://blogs.prio.org/2020/02/is-legal-technology-a-new-moment-in-the-law-and-development-trajectory [https://perma.cc/2XR2-5SAK] (critiquing theories “espousing optimistic and frequently utopian claims about the capacity of technology to improve legal practice, make it more affordable and accessible and lower the price of legal services”).

[9]. See, e.g., Amy B. Cyphert, A Human Being Wrote This Law Review Article: GPT-3 and the Practice of Law, 55 U.C. Davis L. Rev. 401, 404, 411-16 (2021) (explaining that racist outputs can result from GPT due to its internet-scraping practices); Cruz, supra note 7, at 399 (“[W]ithout careful coding considerations, legal technologies that integrate artificial intelligence . . . into their decision-making programs run the risk of producing racially biased results.”); Cruz, supra note 7, at 399 (“Technology is not helpful if the end result harms the communities it is employed to assist.”); Emily S. Taylor Poppe, The Future Is Bright Complicated: AI, Apps & Access to Justice, 72 Okla. L. Rev. 185, 186 (2019) (describing the “potential of legal technology to reproduce, rather than ameliorate, existing social inequalities”).

[10]. See, e.g., Task Force on the Delivery of Legal Services: Report and Recommendations, Ariz. Sup. Ct. 9 (Oct. 4, 2019) [hereinafter Arizona Task Force Report], https://www.azcourts.gov/Portals/74/LSTF/Report/LSTFReportRecommendationsRED10042019.pdf [https://perma.cc/F6A9-A8S4] (observing how “technology-based and artificial intelligence platforms have stepped in to serve clients” in light of “the large market for legal services left unserved by lawyers”); Sonday, supra note 5 (highlighting the ways that legal-services organizations “are not wasting any time in applying new technologies like AI and machine learning to access to justice issues”).

[11]. 2023 Report on the State of the Legal Market: Mixed Results and Growing Uncertainty, Thomson Reuters Inst. and Geo. L. Ctr. Ethics & Legal Pro. (Jan. 9, 2023), https://www.thomsonreuters.com/en-us/posts/wp-content/uploads/sites/20/2023/01/2023-State-of-the-Legal-Market.pdf [https://perma.cc/MAK6-KMA7].

[12]. Justice Gap, supra note 1, at 7.

[13]. See, e.g., The Future of Legal Services in Oregon, Or. State Bar Futures Task Force 3 (June 2017), http://www.osbar.org/_docs/resources/taskforces/futures/futurestf_summary.pdf [https://perma.cc/VX6J-4L25] (observing how people in Oregon are bypassing traditional legal services by “using ‘intelligent’ online software to create their own wills, trusts,” and other legal documents).

[14]. Id. (describing how “‘intelligent’ online software [can] create . . . ’routine’ legal documents that [users] believe are sufficient to meet their needs”).

[15]. See Amy J. Schmitz, Measuring “Access to Justice” in the Rush to Digitize, 88 Fordham L. Rev. 2381, 2393 (2020) (“[M]ost justiciable issues that arise in society never get as far as consultation with a lawyer, let alone reach the courts.”); Richard Susskind, The End of Lawyers?: Rethinking the Nature of Legal Services 90 (2008) (“[M]any lawyers exaggerate the extent to which their performance depends on deep expertise . . . . Lawyers often overstate the extent to which the content of their work is creative, strategic, and novel.”).

[16]. See, e.g., Rebecca L. Sandefur, Legal Advice from Nonlawyers: Consumer Demand, Provider Quality, and Public Harms, 16 Stan. J. C.R. & C.L. 283, 312 (2020) (“Consumers value and purchase legal services from providers who are not fully qualified attorneys. The legal work produced by nonlawyers can be as good as—and sometimes better than—that of lawyers.”).

[17]. See Samuel Gibbs, Chatbot Lawyer Overturns 160,000 Parking Tickets in London and New York, Guardian (June 28, 2016, 6:07 EDT), https://www.theguardian.com/technology/2016/jun/28/chatbot-ai-lawyer-donotpay-parking-tickets-london-new-york [https://perma.cc/UKZ8-KMFS].

[18]. See Andrew Smith, Hello Prenup Update | Shark Tank Season 13, Shark Tank Recap (Oct. 29, 2023), https://sharktankrecap.com/shark-tank-hello-prenup-update-season-13 [https://perma.cc/ZQ4S-35UF].

[19]. See Rasa Legal, https://www.rasa-legal.com [https://perma.cc/9P4S-M4WH].

[20]. Lauren Moxley, Zooming Past the Monopoly: A Consumer Rights Approach to Reforming the Lawyer’s Monopoly and Improving Access to Justice, 9 Harv. L. & Pol’y Rev. 553, 554 (2015); see LegalZoom, https://www.legalzoom.com [https://perma.cc/8YQ8-P2ZJ].

[21]. See, e.g., DoNotPay Honored with ABA Brown Award for Access to Justice Efforts, Am. Bar Ass’n (Jan. 23, 2020), https://www.americanbar.org/news/abanews/aba-news-archives/2020/01/donotpay-honored-with-aba-brown-award-for-access-to-justice-effo/?login [https://perma.cc/9CCC-8F6P].

[22]. See Debra Cassens Weiss, ‘Robot Lawyer’ DoNotPay Reaches Settlement in Suit Alleging It’s Neither Robot nor Lawyer, A.B.A. J. (June 11, 2024, 3:47 PM CDT), https://www.abajournal.com/news/article/robot-lawyer-donotpay-reaches-settlement-in-suit-alleging-it-is-neither-a-robot-nor-a-lawyer [https://perma.cc/WPM6-STX2].

[23]. See Cruz, supra note 7, at 364 (describing the ability of AI-driven “chatbots” to “help[] the individual identify their legal issue” and then “connect individuals to legal service providers”); The Future of Legal Services in Oregon, supra note 13, at 5 (describing the development of “sophisticated referral networks” that are part of new “online service delivery models”).

[24]. See Poppe, supra note 9, at 188 (explaining that within the context of legal AI and applications we are seeing a “disaggregation [of legal work that] creates the possibility for multiple sources of legal information and services”).

[25]. Filing Fairness Project, Stan. L. Sch. Legal Design Lab’y, https://filingfairnessproject.law.stanford.edu [https://perma.cc/2GG6-BJC7].

[26]. See Stephanos Bibas, Lawyers’ Monopoly and the Promise of AI, 134 Yale L.J.F. 920, 920-22 (2025) (exploring the relationship between jurisdictional unauthorized practice of law rules and AI).

[27]. See Model Rules of Pro. Conduct r. 5.4(b) (Am. Bar. Ass’n 2020); Andrew M. Perlman, Towards the Law of Legal Services, 37 Cardozo L. Rev. 49, 75-83 (2015) (recounting the ABA’s history of resistance to amending Model Rule 5.4).

[28]. See generally Ralph Baxter, Dereliction of Duty: State-Bar Inaction in Response to America’s Access-to-Justice Crisis, 132 Yale L.J.F. 228, 256-57 (2022) (exploring state-bar policies that hinder access to justice in American courts).

[29]. See McPeak, supra note 7, at 461 (observing how legal technology is changing the way lawyers work and that it “may fundamentally alter law practice entirely”).

[30]. See, e.g., id. at 466 (describing how legal technology, including AI, can “streamline legal-related tasks,” leading to “more accurate results, for less cost, and in a much quicker timeframe”).

[31]. See Nicole Black, What You Need to Know About Virtual and Chatbot Assistants for Lawyers, A.B.A. J. (Jan. 27, 2020, 6:00 AM), https://www.abajournal.com/web/article/what-you-need-to-know-about-virtual-and-chatbot-assistants-for-lawyers [https://perma.cc/C5L6-69TU].

[32]. See Blake A. Klinkner, Artificial Intelligence and the Future of Law Office Data Practices, Wyo. Law. (Apr. 2023), https://digitaleditions.walsworth.com/publication/?i=788527&article_id=4553439&view=articleBrowser [https://perma.cc/BWE3-28UM] (“Although artificial intelligence has eliminated some attorney positions and billable hours associated with eDiscovery, the simple reality is that artificial intelligence is the future of discovery, and attorneys should embrace the efficiencies that artificial intelligence provides in processing discovery.”).

[33]. See Ed Walters, The Model Rules of Autonomous Conduct: Ethical Responsibilities of Lawyers and Artificial Intelligence, 35 Ga. St. U. L. Rev. 1073, 1077 (2019) (describing natural language searching and the highly individualized results that platforms can produce).

[34]. See Brescia et al., supra note 2, at 572 (explaining how AI can “create legal arguments based on predictive tools about a particular type of case”).

[35]. See, e.g., Sara Merken, New York Lawyers Sanctioned for Using Fake ChatGPT Cases in Legal Brief, Reuters (June 26, 2023, 4:28 AM EDT), https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22 [https://perma.cc/H5RX-RMTA].

[36]. See Brescia et al., supra note 2, at 554 (“Many assess the impact of these disruptions on the delivery of services to wealthier clients and corporations.”); see also Drew Simshaw, Toward National Regulation of Legal Technology: A Path Forward for Access to Justice, 92 Fordham L. Rev. 1, 13 (2023) (describing how large law firms serving wealthy clients and corporations may be better situated to integrate these technologies into their service delivery and business models because they “have greater resources to pursue emerging legal technology, can hire in-house information technology personnel or outside consultants, and have access to more specifically-tailored all-inclusive services”).

[37]. See, e.g., Debra Cassens Weiss, Meet Harvey, BigLaw Firm’s Artificial Intelligence Platform Based on ChatGPT, A.B.A. J. (Feb. 17, 2023, 9:50 AM CST), https://www.abajournal.com/news/article/meet-harvey-biglaw-firms-artificial-intelligence-platform-based-on-chatgpt [https://perma.cc/546E-5RBU].

[38]. See Bob Ambrogi, The Justice Gap in Legal Tech: A Tale of Two Conferences and the Implications for A2J, LawSites (Feb. 5, 2024), https://www.lawnext.com/2024/02/the-justice-gap-in-legal-tech-a-tale-of-two-conferences-and-the-implications-for-a2j.html [https://perma.cc/KF2C-8YLV].

[39]. Id.

[40]. See Jordan Furlong, Reflections: The New Legal Economy: What Will Lawyers Do?, Wis. Law. (Feb. 11, 2020), https://www.wisbar.org/NewsPublications/WisconsinLawyer/Pages/Article.aspx [https://perma.cc/XY2K-BR6H] (arguing that “rich people and large in-house law departments will experience a golden age of law,” while those in other settings will not).

[41]. See Simshaw, supra note 36, at 15-21 (identifying the resource, resilience, and relationship barriers to effectively “calibrating” legal AI for access to justice).

[42]. Cary Coglianese & Lavi M. Ben Dor, AI in Adjudication and Administration, 86 Brook. L. Rev. 791, 797 (2021) (“The most widespread technological innovation in the courts in recent years has manifested in the use of various forms of digitization (such as electronic filing and case management).”).

[43]. See id. at 798-99; see also Arizona Task Force Report, supra note 10, at 9 (describing how Arizona has “turned to technology to help bridge the justice gap,” including by “implementing a virtual resource center . . . with legal information sheets and legal information videos”).

[44]. See Reiling, supra note 4, at 3-4. Reiling also notes that in complex cases, “the need for information technology mainly consists of knowledge systems that make legal sources easily accessible, and a digital case file that can present large amounts of information in an accessible manner.” Id. at 3.

[45]. See id. at 3 (describing the Netherlands’s Public Prosecution Service, which handles all criminal cases that do not require judgment, and imagining AI’s potential to “help people resolve more of their problems by themselves and thus prevent disputes or court cases”).

[46]. See Coglianese & Ben Dor, supra note 42, at 811-13; Samuel D. Hodge, Jr., Is the Use of Artificial Intelligence in Alternative Dispute Resolution a Viable Option or Wishful Thinking?, 24 Pepp. Disp. Resol. L.J. 91, 101 (2024) (explaining that online dispute resolution “shares and builds upon the foundation of alternative dispute resolution, underscoring more straightforward and timesaving ways of addressing conflict”).

[47]. See Coglianese & Ben Dor, supra note 42, at 812-13 (explaining that court use of ODR ranges “from a simple website that facilitates entering pleas for traffic tickets online to an online portal for engaging in asynchronous negotiations”); Julianne Dardanes, Comment, When Accessing Justice Requires Absence from the Courthouse: Utah’s Online Dispute Resolution Program and the Impact It Will Have on Pro Se Litigants, 21 Pepp. Disp. Resol. L.J. 141, 143 (2021) (describing how Utah’s online dispute resolution addresses “claimants lacking access to representation and information” and “overcrowded dockets”).

[48]. Coglianese & Ben Dor, supra note 42, at 797 (“Some courts also recognize a role for online dispute resolution systems developed by the private sector.”).

[49]. National Open Court Data Standards (NODS), Nat’l Ctr. for State Cts., https://www.ncsc.org/consulting-and-research/areas-of-expertise/data/national-open-court-data-standards-nods [https://perma.cc/U46E-EYJT].

[50]. See id. (recognizing that “[d]emands for court data are growing dramatically, particularly as courts implement electronic record systems” and that “[b]oth public and private organizations are aggressively putting pressure on courts to make court data and legal documents publicly accessible,” and outlining the initiative’s responsive purposes).

[51]. Id.

[52]. See, e.g., David Freeman Engstrom & R.J. Vogt, The New Judicial Governance: Courts, Data, and the Future of Civil Justice, 72 DePaul L. Rev. 171, 176-77 (2023); David Colarusso & Erika J. Rickard, Speaking the Same Language: Data Standards and Disruptive Technologies in the Administration of Justice, 50 Suffolk U. L. Rev. 387, 387-90 (2017) (analyzing the role of state courts in “the lack of clearly-defined judicial data standards”).

[53]. See Cinara Maria Carneiro Rocha & Antonio Henrique Graciano Suxberger, Enablers of Electronic Judicial Process in Brazil, 2023 CAPSI Proceedings 186, 186 (“The public legal sector has been slower than other government sectors to integrate IT into its activity”); Coglianese & Ben Dor, supra note 42, at 838 (“[W]ith the continued reliance on machine learning in other spheres of life, the public acceptability of, if not demand for, its use in the governmental sector may only increase.”).

[54]. See, e.g., Nat’l Ctr. for State Cts., https://www.ncsc.org [https://perma.cc/7ZMV-H55M] (describing itself as striving to “drive innovation and progress in courts”); Nat’l Ass’n for Ct. Mgmt., https://nacmnet.org [https://perma.cc/YB3U-LE89]; Nat’l Jud. Coll., https://www.judges.org [https://perma.cc/WF2C-H3AS] (serving and providing courses for “state trial court judges, administrative law judges, limited jurisdiction judges, military judges, tribal judges, even commissioners of licensing bodies”).

[55]. Coglianese & Ben Dor, supra note 42, at 793.

[56]. See id. at 794 (“Decisions about digital technologies used by courts throughout the United States are . . . made by a plethora of institutions and actors.”).

[57]. Id. at 794 n.10 (explaining that “[t]his estimate is based on a telephone and email exchange with [the National Center for State Courts] staff, and it includes a vast number of municipal courts,” and that “the uncertainty reflected in the range (rather than a point estimate) is apparently due to fairly regular changes in the size and organization of municipal courts”).

[58]. Id. at 794.

[59]. See, e.g., Brandon L. Garrett & John Monahan, Judging Risk, 108 Calif. L. Rev. 439, 441-44 (2020).

[60]. Id. at 448-49.

[61]. See id. at 450.

[62]. Coglianese & Ben Dor, supra note 42, at 801 (citing National Landscape, Mapping Pretrial Injustice, https://pretrialrisk.com/national-landscape [https://perma.cc/Q4KE-K5SR]).

[63]. Id. at 803-04.

[64]. See, e.g., Cade Metz & Adam Satariano, An Algorithm that Grants Freedom, or Takes It Away, N.Y. Times (Feb. 7, 2020), https://www.nytimes.com/2020/02/06/technology/predictive-algorithms-crime.html [https://perma.cc/6USW-DTRY].

[65]. See Kristen Sonday, Tech-Enabled A2J: How Tech Is Helping Pro Se Litigants Navigate the Courts, Thomson Reuters (Aug. 17, 2020), https://www.thomsonreuters.com/en-us/posts/legal/tech-enabled-a2j-pro-se-litigants [https://perma.cc/6NEP-A4A2].

[66]. Id. (quoting Quinten Steenhuis).

[67]. See Colleen F. Shanahan & Anna E. Carpenter, Simplified Courts Can’t Solve Inequality, 148 Daedalus 128, 128-35 (2019) (“[T]he volume of cases in state civil courts overwhelms their resources. The number of civil cases brought to state courts hovers around twenty million per year. This number would be even greater if all civil problems were brought to court . . . .”).

[68]. See id.

[69]. See Simshaw, supra note 36, at 7-8.

[70]. See Simshaw, supra note 36, at 13; see also Brescia et al., supra note 2, at 605 (“[O]ne must ask the question: are these types of innovations a ‘substitute’ for true access to justice? In many respects, the clear answer is ‘no.’”); id. (“Representation by an attorney provides not just competent but zealous services rendered in a way that is unique to the needs of the individual, and those services are backed up by the disciplinary machinery that ensures they are rendered in a way that satisfies the attorney’s ethical obligations to the individual.”).

[71]. See Simshaw, supra note 36, at 13.

[72]. See Simshaw, supra note 36, at 13-14; Brescia et al., supra note 2, at 554 (describing how “[m]any assess the impact of these disruptions on the delivery of services to wealthier clients and corporations”); Jordan Furlong, Reflections: The New Legal Economy: What Will Lawyers Do?, Wis. Law. (Feb. 11, 2020), https://www.wisbar.org/NewsPublications/WisconsinLawyer/Pages/Article.aspx [https://perma.cc/2K3Y-EWBM] (arguing that, with the rise of legal technology, “rich people and large in-house law departments will experience a golden age of law,” but others will not).

[73]. See Simshaw, supra note 36, at 14-15; Rebecca Kunkel, Rationing Justice in the 21st Century: Technocracy and Technology in the Access to Justice Movement, 18 U. Md. L.J. Race, Religion, Gender & Class 366, 386 (2019) (criticizing a “rather bold assumption that technology will necessarily deliver on [the] promise of efficiency”).

[74]. See Cyphert, supra note 9, at 404 (describing how such scraping can result in “toxic outputs”).

[75]. Daniel N. Kluttz & Deirdre K. Mulligan, Automated Decision Support Technologies and the Legal Profession, 34 Berkeley Tech. L.J. 853, 862 (2019).

[76]. Cruz, supra note 7, at 399 (explaining that “without careful coding considerations, legal technologies that integrate artificial intelligence . . . into their decision-making programs run the risk of producing racially biased results”).

[77]. See David Freeman Engstrom, Daniel E. Ho, Catherine M. Sharkey & Mariano-Florentino Cuéllar, Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies, Admin. Conf. of the U.S. 79-81 (Feb. 2020), https://www.acus.gov/sites/default/files/documents/Government%20by%20Algorithm.pdf [https://perma.cc/86XH-YNF9].

[78]. Stacy A. Baird, Government Role and the Interoperability Ecosystem, 5 I/S: J.L. & Pol’y for Info. Soc’y 219, 223 (2009).

[79]. See id. at 222 (explaining that “the ability to achieve meaningful technical interoperability largely depends upon the health of the broader ‘interoperability ecosystem’” and introducing the “five key aspects to an ‘interoperability ecosystem’”).

[80]. Id. at 231-32 (quoting Harry Newton, Newton’s Telecom Dictionary 389 (18th ed. 2002)).

[81]. See id. at 232.

[82]. See Katie Brehm, Momori Hirabayashi, Clara Langevin, Bernardo Rivera Muñozcano, Katsumi Sekizawa & Jiayi Zhu, The Future of AI in the Brazilian Judicial System, Colum. Sch. of Int’l & Pub. Affs. 28 (2020) [hereinafter Brazil Report], https://d26k070p771odc.cloudfront.net/wp-content/uploads/2020/06/SIPA-Capstone-The-Future-of-AI-in-the-Brazilian-Judicial-System-1.pdf [https://perma.cc/8MHC-T9RC] (“If the judicial system can successfully adopt [open-source software (OSS)] to their current system, OSS will be able to increase the judicial system’s transparency and collaboration among the public sector and the civil society.”).

[83]. See Baird, supra note 78, at 232.

[84]. See Reiling, supra note 4, at 2 (“[T]he work of courts and judges is to process information; parties bring information to the court, transformations take place in the course of the procedure, and the outcome is also information.”).

[85]. Cf. Eli Wald, The Access and Justice Imperatives of the Rules of Professional Conduct, 35 Geo. J. Legal Ethics 375, 415-21 (2022) (arguing that the ABA should revise the Model Rules of Professional Conduct to more seriously address the access-to-justice gap).

[86]. Model Rules of Pro. Conduct pmbl. (Am. Bar Ass’n 2020).

[87]. Id. r. 6.1.

[88]. Id. r. 1.5(a).

[89]. Baird, supra note 78, at 232.

[90]. Id.

[91]. See, e.g., Gillian K. Hadfield & Deborah L. Rhode, How to Regulate Legal Services to Promote Access, Innovation, and the Quality of Lawyering, 67 Hastings L.J. 1191, 1195 (2016); James M. McCauley, The Future of the Practice of Law: Can Alternative Business Structures for the Legal Profession Improve Access to Legal Services?, 51 U. Rich. L. Rev. 53, 55, 59 (2016); Rebecca Love Kourlis & Neil M. Gorsuch, Legal Advice Is Often Unaffordable. Here’s How More People Can Get Help, USA TODAY (Sept. 17, 2020, 3:15 AM ET), https://www.usatoday.com/story/opinion/2020/09/17/lawyers-expensive-competition-innovation-increase-access-gorsuch-column/5817467002 [https://perma.cc/J5P6-B3Z6] (advocating for lifting law firm ownership and investment restrictions); Resolution 115: Encouraging Regulatory Innovation, Am. Bar Ass’n (Feb. 2020), https://www.americanbar.org/groups/centers_commissions/center-for-innovation/Resolution115 [https://perma.cc/757X-2MP7].

[92]. See generally Simshaw, supra note 36 (proposing this reform).

[93]. Baird, supra note 78, at 233.

[94]. Id.

[95]. See Reiling, supra note 4, at 8 (describing how, in the same way that “if a machine is to be able to recognize a cat with 95% certainty, we need about 100,000 pictures of cats,” individual jurisdictions will lack the volume of information needed to make effective use of AI in the legal context (quoting Luc Julia, L’intelligence Artificielle N’existe Pas [Artificial Intelligence Does Not Exist] 123 (2019))); Brazil Report, supra note 82, at 15 (acknowledging in the context of courts that “AI tools require massive amounts of errorless data to train their algorithms”).

[96]. Brazil Report, supra note 82, at 15-16.

[97]. See Reiling, supra note 4, at 3 (explaining that, in complex cases, “the need for information technology mainly consists of knowledge systems that make legal sources easily accessible, and a digital case file that can present large amounts of information in an accessible manner”).

[98]. See id. at 8 (“AI can be used much more effectively once legal information such as court decisions is made machine-processable before publication with textual readability, document structures, identification codes and metadata all available. Adding legal meaning in the form of structured terminology and defined relationships, will further increase the effectiveness of AI in the court process.”).

[99]. See SCALES Awarded NSF Grant to Build the Integrated Justice Platform Proto-OKN, Systematic Content Analysis Litig. EventS Open Knowledge Network (Oct. 12, 2023), https://scales-okn.org/2023/10/12/scales-awarded-nsf-grant-to-build-the-integrated-justice-platform-proto-okn [https://perma.cc/4AES-DSYP] (“For a country that has the largest criminal justice system in the world, resolving over 18 million criminal cases each year, the lack of nationally-accessible and linked data available across the United States hinders policymaking and understanding of the criminal justice system.”).

[100]. Id.

[101]. McPeak, supra note 7, at 467 (arguing that AI can help combat bias in the legal system by “eliminating some extraneous factors from decision-making” and “unearth[ing] the extra-legal (and perhaps improper) factors that judges might be using in making decisions”).

[102]. See Reiling, supra note 4, at 9 (“For courts and court systems, largely set up and run as production organisations, this kind of [AI] development work is a huge new task.”).

[103]. See id. at 1 (“Complexity reduction is at the heart of court processes, irrespective of subject matter.”).

[104]. See Baird, supra note 78, at 233.

[105]. See supra notes 74-78 and accompanying text.

[106]. See supra notes 59-64 and accompanying text.

[107]. See Brazil Report, supra note 82, at 4, 8 (“Brazil has the largest judiciary system in the world with 92 courts; each court within the system receives a large volume of lawsuits every day.”).

[108]. Id. at 4.

[109]. Id.

[110]. Id. at 8.

[111]. Id.

[112]. Id. at 13.

[113]. Id. at 4.

[114]. Id.

[115]. Id. at 5.

[116]. See Laurel S. Terry, Steve Mark & Tahlia Gordon, Trends and Challenges in Lawyer Regulation: The Impact of Globalization and Technology, 80 Fordham L. Rev. 2661, 2667 (2012) (noting that, within the context of emerging technologies in law, “when comparing regulation from one country to another, one must be sure to consider whether one is examining comparable instruments and approaches”).

[117]. See Rocha & Suxberger, supra note 53, at 187 (observing in the context of “eJustice” more broadly that, “[b]esides the legal tradition and cultural differences between countries, the Brazilian case set some landmarks that may be common and help others to address their reforms”).

[118]. Id. at 11.

[119]. Brazil Report, supra note 82, at 11.

[120]. Id. at 12.

[121]. See Rocha & Suxberger, supra note 53, at 186-87.

[122]. See id. at 193 (“Data generated from standardisation and [Electronic Judicial Process] actual use . . . permitted many Brazilian states to use Artificial Intelligence (AI) systems to enhance court performance.”).

[123]. Brazil Nat’l Council of Just., The Justice 4.0 Program 13 (Aline Lorena Tolosa trans., 2024).

[124]. Id.

[125]. Brazil Report, supra note 82, at 12.

[126]. Id.

[127]. Id.

[128]. Id. at 16.

[129]. See Brazil Nat’l Council of Just., supra note 123, at 17.

[130]. Id. at 20.

[131]. See generally European Interoperability Framework - Implementation Strategy, Interoperability Action Plan, Eur. Comm’n (Mar. 23, 2017) [hereinafter EIF Implementation Strategy], https://eur-lex.europa.eu/resource.html?uri=cellar:2c2f2554-0faf-11e7-8a35-01aa75ed71a1.0017.02/DOC_2&format=PDF [https://perma.cc/F5WW-L8KF] (laying out the European Union’s interoperability framework). For the full European Interoperability Framework, see New European Interoperability Framework - Promoting Seamless Services and Data Flows for European Public Administrations, Eur. Comm’n (2017), https://ec.europa.eu/isa2/sites/default/files/eif_brochure_final.pdf [https://perma.cc/DC8Y-46NU].

[132]. EIF Implementation Strategy, supra note 131, at 3 (describing actions for achieving the goal of “[d]evelop[ing] organisational interoperable solutions”).

[133]. Id.

[134]. See supra note 113 and accompanying text.

[135]. National Open Court Data Standards, supra note 49.

[136]. EIF Implementation Strategy, supra note 131, at 3.

[137]. Exec. Order No. 14,110, 88 Fed. Reg. 75,191 (Oct. 30, 2023).

[138]. Id.

[139]. Id.

[140]. Id.

[141]. Fact Sheet: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, White House (Oct. 30, 2023), https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence [https://perma.cc/2ZXU-UVYU].

[142]. See Reiling, supra note 4, at 6-7 (summarizing the more than twenty-five sets of ethical principles for AI that have emerged in recent years).

[143]. Rebecca Crootof, Margot E. Kaminski & W. Nicholson Price II, Humans in the Loop, 76 Vand. L. Rev. 429, 440 (2023).

[144]. Id. at 441.

[145]. See generally Simshaw, supra note 36 (arguing that “regulatory reform processes” governing legal services “should be explored at the national level”).

[146]. See, e.g., Robert J. Derocher, Crisis in the Courts: Bars Take Steps to Stave off Judicial Funding Cuts, ABA Bar Leader (May-June 2010), https://www.americanbar.org/groups/bar-leadership/publications/bar_leader/2009_10/may_june/courtcrisis [https://perma.cc/T8Z8-NTN5] (“Deep cuts in spending for the judiciary over the last year have triggered layoffs, salary cuts, deferred maintenance, and court closures nationwide, with even deeper cuts on the horizon as states grapple with a weak economy that continues to widen budget gaps.”); Nate Raymond, US Judiciary Set to Receive Modest Spending Boost from Congress, Reuters (Mar. 21, 2024, 5:11 PM EDT), https://www.reuters.com/legal/government/us-judiciary-set-receive-modest-spending-boost-congress-2024-03-21 [https://perma.cc/F4F2-PSLQ] (explaining that the small increase in federal-judiciary funding included in a proposed spending package “falls short of what the court system had sought”).

[147]. See Jumpei Komoda, Designing AI for Courts, 29 Rich. J.L. & Tech. 145, 165 (2023) (explaining that “[c]ommercial viability is one concern for AI implementation,” and that “[i]mplementing AI will not only require development costs, but also additional resources to create the data infrastructure and to manage frequent system upgrades”).

[148]. Blueprint for a Legal Regulatory Lab in Washington State, Wash. Cts. Prac. of L. Bd. 10 (Feb. 2022), https://www.wsba.org/docs/default-source/legal-community/committees/practice-of-law-board/practice-of-law-board_lab-blueprint_02-11-2022.pdf [https://perma.cc/HP4G-WM6B] (envisioning that “most of the data [collected in Washington’s envisioned regulatory sandbox] will be collected in the same format to potentially facilitate cross‑jurisdiction data analysis, and possible future reciprocity with other states such as Utah”).

