The Yale Law Journal

Volume 134, 2024-2025
Forum

Interoperable Legal AI for Access to Justice

14 Mar 2025

abstract. The access-to-justice gap is growing, affecting individuals with both civil and criminal needs in the United States. Though these challenges are multifaceted, procedural barriers in the U.S. legal system can often inhibit access-to-justice efforts. The resulting inequities undermine fairness for those interacting with courts and jeopardize the legitimacy of the broader legal system. Legal technology driven by artificial intelligence (AI) has been heralded for its potential to combat these challenges on three access-to-justice fronts that are often conceptualized in isolation: a consumer (i.e., self-help) front, a legal-service-provider front, and a court front. Progress on each of these fronts is apparent, though not at the pace or scale necessary to make meaningful inroads into closing the justice gap nationwide. The time has come to appreciate that, although progress on all three fronts is necessary for closing the justice gap and maximizing fairness, it is insufficient if there is not also some level of shared commitment and coordination across—and not just within—all fronts. This Essay argues that technological and procedural legal interoperability—that is, widespread consistency in technology design and related processes—should be at the forefront of these efforts, particularly as they relate to artificial intelligence. Further, although the consumer and legal-services fronts remain critically important, courts should be recognized as the necessary drivers in achieving this interoperable legal AI.

Introduction

The access-to-justice gap is growing. The COVID-19 pandemic1 and economic recessions exacerbated the crisis,2 and millions of Americans still lack access to resources to meet their civil legal needs.3 At the same time, the United States’s criminal-justice system continues to struggle with overworked public defenders and underresourced court systems, resulting in massive case backlogs. Though these challenges are multifaceted, procedural barriers in the U.S. legal system often inhibit access-to-justice efforts and deserve special attention. The resulting inequities undermine fairness for those interacting with courts and jeopardize the legitimacy of the courts’ processes and the legal system more broadly.4 This is an avoidable fate.

Legal technology driven by artificial intelligence (AI) has been heralded for its potential to combat these challenges on three access-to-justice fronts that are often conceptualized in isolation.5 First, AI has the potential to revolutionize how consumers identify, navigate, and ultimately solve their legal problems, either by helping them to do so on their own (so-called “self-help” tools) or by connecting them with legal professionals. Second, AI has the potential to empower legal-service providers to serve more consumers and achieve better outcomes. And third, AI has been envisioned as a promising means by which to streamline and improve courts’ legal processes that have historically limited access and hindered fair outcomes.6

It is no secret that progress must be made on all three of these fronts to maximize access and fairness. Indeed, much attention has been paid—and rightfully so—to the potential impact of enhanced legal-AI tools.7 But the impacts of these developments will likely be limited if court processes are not streamlined to account for the increased volume, variety, and technology-driven nature of cases. Similarly, the impact of AI-driven processes for legal-service providers may be limited if consumers cannot meaningfully participate in problem solving through the new media used by their providers and the courts. The inverse is true as well—progress in the courts will be meaningless if lawyers or litigants are unable to access or use AI tools effectively.

To date, legal scholarship has advocated for progress on each of these fronts. While progress is apparent, it is not taking place at the pace or scale necessary to make meaningful inroads into closing the justice gap nationwide.8 Access for access’s sake and efficiency for efficiency’s sake will not necessarily result in improvements to fairness, especially if AI is designed and implemented in ways that intentionally or unintentionally automate bias and magnify inequality.9 The time has come to appreciate that, although progress on all three fronts is necessary for closing the justice gap and maximizing fairness, it will be insufficient if there is not also some level of shared commitment and coordination across—and not just within—all fronts.

This Essay argues that technological and procedural legal interoperability—that is, widespread consistency in technology design and related processes—should be at the forefront of these efforts, particularly as they relate to artificial intelligence. Further, although the consumer and legal-services fronts remain critically important, courts should be recognized as the necessary drivers in achieving this interoperable legal AI. Fortunately, there are models for interoperable legal AI in other countries, making what might otherwise seem like a daunting prospect seem more feasible.

Part I begins by describing the potential of AI to make progress on the consumer, legal-service, and court fronts. It then turns to the isolated progress seen to date in each of these areas and the long-term limitations of this progress absent interoperable legal AI. Part II studies Brazil’s focus on interoperability to establish its importance with regard to both technology and other processes across the legal problem-solving landscape.

Finally, in Part III, this Essay argues that courts must be the drivers of interoperable legal AI, underscoring the potential for interoperable legal AI to align with broader efforts in AI governance that would both support and be supported by courts’ efforts. To be sure, courts are traditionally followers rather than leaders when it comes to implementing new technology. In addition, local regulation of legal services and local variations in legal rules and processes present challenges that are in some ways distinguishable from those of other industries. It will therefore be important to analyze the prospect of interoperable legal AI within broader discussions of legal-regulatory reform, including my proposal for a national legal regulatory “sandbox”—a reform mechanism that would provide temporary safe harbors for testing innovative services and collecting data in areas of regulatory uncertainty. The proposed sandbox would promote standardization, transparency, and, ultimately, the technological and procedural interoperability necessary for AI to reach its potential as a tool to help close the access-to-justice gap and facilitate fair outcomes.

I. The Limits of AI Efforts with Consumers, Service Providers, and Courts

The legal problem-solving landscape has made commendable efforts to leverage legal technologies to make inroads in closing the access-to-justice gap.10 But the results have been too local, are limited in scope, and lack the scalability needed to maximize impact. In 2023, for example, the Georgetown Law Center on Ethics and the Legal Profession concluded that closing the justice gap requires substantial investment from the industry.11 The Legal Services Corporation reported in 2022 that, despite recent efforts, “[l]ow‑income Americans do not get any or enough legal help for 92% of their substantial civil legal problems.”12 This Part analyzes the isolated progress seen to date on each front and the limitations of long-term progress absent interoperable legal AI.

A. The Consumer Front

In some cases, technology-driven tools are helping people solve their own legal problems—ranging from creating their own wills and trusts,13 to drafting routine legal documents,14 to completing other tasks that do not always require the help of a professional.15 Nonprofessional assistance has always been in demand,16 and AI has stepped up to help meet it. A service called DoNotPay, run by a then-undergraduate student at Stanford, made headlines in 2016 when it helped overturn 160,000 parking tickets.17 HelloPrenup, a service designed to help couples with prenuptial agreements, secured $150,000 in investment on the popular television show Shark Tank.18 Rasa, a technology-driven app-based service, helps people in Utah assess their eligibility for expungement of their criminal records and, if eligible, navigate the process with the help of AI-enabled software.19 And, of course, LegalZoom has become almost synonymous with legal self-help, assisting “over two million individuals and small businesses by helping consumers prepare downloadable legal documents such as wills, prenuptial agreements, copyrights, real estate leases, and articles of incorporation.”20 In a landscape of regulatory uncertainty, some of these services have met with mixed receptions and results. For example, DoNotPay, which has evolved from helping consumers challenge parking tickets to assisting self-represented litigants in small claims court, has been on the receiving end of both awards for access to justice21 and lawsuits.22 But AI can also help people determine when their case warrants professional assistance and can connect them with appropriate professionals when needed.23 In this sense, we are seeing broader movement toward a democratization of legal information.24

This progress, of course, is a challenging endeavor when jurisdictions vary not only in their laws, but also in their rote procedural requirements, such as the design of their forms, processes for filing, and rules governing the use of AI tools. The Filing Fairness Project, an initiative of the Legal Design Lab at Stanford Law School, advocates for modernizing filing procedures in the civil justice system across multiple state court systems, recognizing that “indecipherable court forms and burdensome filing processes discourage participation and prevent many from asserting their rights.”25 The success of these efforts depends on a broad national commitment to promoting interoperability, which will require interdisciplinary and cross-industry collaboration that is currently inhibited by regulatory uncertainty. Indeed, innovation in this space is stifled by uncertainty as to whether certain tools and services constitute the unauthorized practice of law, which is defined and regulated differently across U.S. jurisdictions.26 In addition, partnerships between lawyers and technologists to develop and provide such tools are often hindered by the nearly universal prohibition in U.S. jurisdictions of nonlawyers holding any ownership interest in a partnership with licensed attorneys.27 Some jurisdictions are exploring regulatory reforms to balance self-help innovation with consumer protection, but not at the speed, scope, or scale necessary.28 As a result, designers of AI self-help tools are left to navigate widely varying terrain concerning both substance and process when trying to develop and deploy these tools on even a small scale.

B. The Legal-Services Front

For those cases that require the help of legal professionals, technology has also made noticeable and commendable strides.29 It is now widely recognized that AI has the ability to increase the efficiency of legal tasks30 ranging from intake,31 to eDiscovery,32 to legal research,33 to developing case strategy,34 and even to assisting with drafting legal documents, though not without high-profile misuses.35

But much of this development is happening in-house at the largest corporate law firms.36 And the most impactful generative-AI developments, like Harvey AI—essentially a more dependable and tailored ChatGPT for lawyers—are designed for and marketed to large firms.37 The “two worlds” of legal-technology development were recently on display at two vastly different legal-technology conferences: a “glitzy celebration of big law tech” at Legalweek, and the very modest and understated Legal Services Corporation’s Innovations in Technology Conference, “devoted to tech for access to justice.”38 Bob Ambrogi described the conferences as illustrative of the “funding gap between those who are developing legal technology to better meet the legal needs of low-income Americans and those who are developing legal tech to serve large law firms and corporate legal departments.”39

While it is possible that such services will trickle down to benefit those outside large law firms, services designed for one setting do not always translate well to others. Large firms may continue to thrive in the “golden age of AI,”40 but other providers will likely continue to struggle with resource, resilience, and relationship barriers that are exacerbated by regulatory uncertainty.41 With the means for in-house partnerships limited by ownership restrictions, and with cross-jurisdictional third-party development less robust and effective than that for large firms, most legal-service providers will face an uphill battle to maximize technology’s effectiveness in this fast-paced and complex ecosystem.

C. The Court Front

Finally, the least discussed but perhaps most important players in this landscape are the courts. The most visible technological innovation in U.S. courts in recent years has been the digitization of court forms, which has facilitated electronic filing and other electronic case management.42 For self-represented litigants, many courts have also made efforts to digitize court documents, post free legal forms online for litigants, and (in fewer jurisdictions) even provide computer kiosks to help people navigate their interactions with the court.43 In civil litigation, courts have also been involved in overseeing eDiscovery practices by litigants, including by assessing whether proper search terms and coding are being used by both sides throughout the process.44

Of course, many jurisdictions appreciate that a truly efficient ecosystem is one in which technology can help prevent many cases from needing to reach the courts in the first place.45 Indeed, alternative dispute resolution—the process of settling disputes without litigation—has blossomed into online dispute resolution (ODR) through the use of algorithms to overcome the cost and limited availability of human mediators.46 When courts nevertheless become involved, some have also embraced ODR as an option for a wide range of processes at this stage,47 sometimes turning to private-sector-developed ODR systems.48

All of these efforts have resulted in massive amounts of data, and some coordination across certain state courts is emerging in ways that are encouraging for eventual broader coordination on more complex interoperable legal AI. For example, the National Open Court Data Standards (NODS) initiative has been developed by the Conference of State Court Administrators and the National Center for State Courts (NCSC) in the form of “business and technical court data standards to support the creation, sharing and integration of court data by ensuring a clear understanding of what court data represent and how court data can be shared in a user-friendly format.”49 The primary goal of the initiative is to ease responses to data requests and improve the accuracy and utility of those data.50 The initiative currently serves only to collect and cleanse data, not to analyze or interpret them, and “includes only data that courts collect for internal business purposes and that are potentially useful for non-court data requesters.”51

Calls have increased for better and more open data collection, management, and standards, particularly among state courts.52 While these data-focused issues are important, recent advancements in legal AI present additional challenges and opportunities and will require broader considerations and coordination. As AI on a broader scale continues to make inroads into other parts of life, courts still lag behind, and the public is likely to expect more from the courts in the years to come.53 One major challenge for courts is spearheading AI efforts on a local level. There are a number of national organizations for courts, judges, and court administrators,54 but as Cary Coglianese and Lavi M. Ben Dor have recognized, “there currently exists no centralized repository of applications of artificial intelligence by courts and administrative agencies,” and “[g]iven the federalist structure of the United States, the development and implementation of AI technology in the public sector is also not determined by any central institution.”55 As such, there is no leadership or centralized national coordination among courts in the United States on the implementation of court-oriented technologies, which can be a major impediment to the adoption of novel technologies.56 Furthermore, jurisdictional variation is staggering: “According to the National Center for State Courts, approximately 15,000 to 17,000 different state and municipal courts exist in the United States,”57 and “[a]ny one of these numerous judicial or administrative entities could in principle have its own policy with respect to electronic filing, digitization of documents, or the use of algorithms to support decision-making.”58

Interestingly, perhaps the most uniform use of technology across jurisdictions is also the most controversial: algorithm-driven criminal-risk assessment.59 These assessments “[use] risk factors to estimate the likelihood (i.e., probability) of an outcome occurring in a population,”60 such as committing a crime, failing to appear in court, violating parole, or engaging in substance abuse.61 As recently as 2021, some form of risk-assessment formula or aid in sentencing had been adopted in all but four states,62 with many states using either the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) or the Level of Service Inventory-Revised (LSI-R) as their algorithmic tools.63 Critics of these tools have been vocal in the mainstream media,64 and this Essay does not argue that such efforts should be endorsed in moves toward interoperability.

D. The Limits of Progress Absent Interoperable Legal AI

Each of these fronts’ limitations are exacerbated by the variations of local rules, regulations, and approaches to legal AI in a world where technology neither waits nor recognizes borders. To the extent that progress on the self-help and legal-services fronts continues, their limitations are likely to be further magnified if courts are not prepared for an increase in cases resulting from the rise of legal AI. Kristen Sonday has observed that “[t]he impact of . . . pro se tools are profound because technology allows them to be scalable and replicable, serving more individuals than ever before.”65 Quinten Steenhuis, Clinical Fellow at Suffolk University Law School, has further noted that, although “[t]echnology [has] taken hundreds of hours of work . . . now we can reach thousands of people who otherwise couldn’t access the court.”66 But courts already struggle to keep up with caseloads, and they are likely to struggle even more with any kind of increase, absent changes of their own.67

Widespread court responsiveness and preparation will also be essential to solidifying the role of courts in broader efforts to ensure that legal AI promotes rather than inhibits access to justice. As Colleen F. Shanahan and Anna E. Carpenter have recognized, improving fairness and equality will require more than merely simplifying court procedures.68 In a previous work advocating for a “national legal regulatory sandbox” to safely test innovations for justice-gap impact and consumer protection, I identified challenges faced by local technology and regulatory reform efforts in light of economic and expertise constraints, as well as empirical challenges, that are more appropriately and effectively addressed at the national level.69 The risk of not overcoming these challenges is the further entrenchment of a two-tiered, wealth-based system of legal services, which could manifest itself in several ways. For example, if low-income individuals are relegated to technology-driven tools and services even when human-driven assistance would be more appropriate, it might be better than nothing, but still not as good as professional human services.70 But the opposite could also be true: legal technology could become incredibly powerful and effective, but not evenly distributed.71 Whereas large firms serving wealthy clients and corporations will have the means to integrate these technologies into their practice, small firms and solo practitioners may not have the resources, resilience, and relationships to do so.72 Moreover, some fear the AI-driven access-to-justice narrative is overhyped and will not significantly alter the status quo of today’s two-tiered system, where not everyone can access legal services.73 These two-tiered systems of inequality are not mutually exclusive.

Without large-scale coordinated efforts across all three fronts to level the playing field and facilitate necessary interdisciplinary and cross-industry collaborations, legal AI risks consolidating power, automating bias, and magnifying inequality. Overt bias has manifested in GPT-driven bots making racist statements due to their reliance on internet-based language, including from websites like Reddit that feature toxic discourse that is “scraped” to develop responses.74 But bias can also manifest itself more subtly. As Daniel N. Kluttz and Deirdre K. Mulligan have observed, “[P]redictive algorithmic systems embed many subjective judgments on the part of system designers—for example, judgments about training data, how to clean the data, how to weight different features, which algorithms to use, what information to emphasize or deemphasize, etc.”75 Without careful consideration during design, racist outputs can result.76 And bias is an especially serious concern when AI is implemented within the government.77

National efforts to increase access to justice, minimize the risk of bias, and ensure fair and accessible AI-driven legal tools and services will be far more likely to succeed if there is a foundation of more consistent—that is, interoperable—technology and processes across the legal problem-solving landscape. The remainder of this Essay argues that such interoperable legal AI should be at the forefront of priorities in this space in the coming years, and that such a priority is both worthwhile and practical.

II. Interoperability as a Key to Maximizing AI’s Access-to-Justice Potential

A. The Pillars of Interoperability in the Legal-AI Landscape

Interoperability is far from a new concept. It has been defined “in the broadest sense” as “the ability of people, organizations, and systems to interact and interconnect so as to efficiently and effectively exchange and use information.”78 Building on existing efforts more narrowly focused on modernizing filing procedures and standardizing the dissemination of certain court data in certain court systems, interoperable legal AI would have broader aims involving more stakeholders. In order for the U.S. court system to thrive as an “interoperable ecosystem” that facilitates AI development and widespread access to legal information, it must establish five key pillars that have been widely associated with interoperability across different industries and settings, including government. Each of these pillars—technical interoperability, organizational interoperability, legal and public-policy interoperability, semantic interoperability, and socially informed interoperability—is discussed below.79

Technical Interoperability. At the heart of interoperability is “technical interoperability,” or “[t]he ability to operate software and exchange information in a heterogeneous network.”80 This can be achieved in a number of ways, such as by collaborating on product design to ensure compatibility or by otherwise setting technical standards across the ecosystem.81 In courts specifically, the widespread and consistent use of open-source software could increase transparency into the judicial system and facilitate cross-sector collaboration.82 As legal technology continues to advance, technical interoperability will need to expand beyond focusing on the underlying data to also include the more technical aspects of emerging AI.

Organizational Interoperability. For interoperability to flourish, there must be education, buy-in, and an alignment of goals across the ecosystem.83 A function of courts is information processing, from facilitating the initiation of a case to overseeing proper procedure to delivering an outcome.84 Each of these aspects could benefit from the streamlining that interoperability can facilitate. But the benefits of interoperability can also further broader legal-system goals concerning access to justice, including helping lawyers fulfill their ethical obligation to combat the justice gap.85 Under the Preamble of the American Bar Association’s Model Rules of Professional Conduct, “[A]ll lawyers should devote professional time and resources and use civic influence to ensure equal access to our system of justice for all those who because of economic or social barriers cannot afford or secure adequate legal counsel.”86 Interoperability would also help lawyers meet their obligations under several specific rules invoking access to justice, including the duty to provide pro bono services87 and the requirement that fees be reasonable.88 If interoperability can maximize the widespread development and effectiveness of AI-driven tools for individuals and legal-service providers, efforts to increase the affordability of legal assistance will be greatly aided.

Legal and Public-Policy Interoperability. This pillar recognizes that interoperability efforts implicate laws and public policy and therefore sometimes require legal and regulatory changes.89 These issues have “arise[n] in the contexts of regulated industries . . . or in government enterprises, such as law enforcement, counter-terrorism, and intelligence.”90 The courts represent a government enterprise that is similarly large, complex, hierarchical, and geographically dispersed. Exploration of interoperability principles would be well situated within ongoing discussions surrounding legal-services regulatory reform,91 including the possibility of a national legal regulatory sandbox that would centralize expertise and other resources, reducing the burden on individual jurisdictions.92

Semantic Interoperability. Another fundamental aspect of interoperability is that all participants “speak the same language.”93 In other words, “the semantics and syntax of communication must be formalized in such a way that users know the appropriate inputs and the computing system recognizes meaning with few errors.”94 For courts, such interoperability would increase the volume of data available to efforts that require large datasets.95 The key to unlocking this potential is “data integration,” which “reconciles data from many data sources with different formats and semantics into meaningful records.”96 Obviously, this is challenging in a parallel federal-state court structure that spans fifty states and the federal court system, not to mention variations at the local level within each jurisdiction.

At a foundational level, an AI-friendly court system requires access to information about both the law and one’s individual case.97 Despite some progress, courts could do much more to improve access to such information. For example, AI on the self-help and legal-services fronts could more effectively make use of case law if cases were more uniformly “machine-processable,” which would require ensuring consistency in structure and certain terminology before publication.98

In addition, more uniform and centralized data could facilitate AI-driven insights into the fairness of the legal system, which would help ensure that access to the courts actually leads to justice.99 There is a recognized need for such information. For example, the interdisciplinary collaboration, Systematic Content Analysis of Litigation EventS Open Knowledge Network, was recently awarded a National Science Foundation grant to build a platform “to address the dearth of accessible information about who is prosecuted and convicted and what kinds of ultimate outcomes they experience,” overcoming the existing “lack of nationally-accessible and linked data available across the United States.”100 Such data, if effectively leveraged in AI-driven analysis, could be used to, among other things, detect bias in judicial decisions that might be difficult to detect absent the assistance of AI.101 Grant funds for such efforts, however, are unavailable in many circumstances, making the prospect of AI uniformity daunting,102 especially if jurisdictions are charged with spearheading such efforts on their own. But efforts to make court processes simpler as part of “semantic interoperability” are not antithetical to the way court systems work; indeed, “complexity reduction” is at the heart of court processes.103 Ensuring that courts are “speaking the same language” when it comes to AI integration would help them manage data and glean insights into how to improve court processes and produce just outcomes.

Socially Informed Interoperability. A final pillar recognizes that “[d]ifferences in cultural, religious, and intellectual perspectives and values, and political, social, economic, and strategic goals may shape how governments or communities approach the goal of achieving interoperability,” and that “[t]hese factors will have an influence on decisions about each facet of the interoperability ecosystem and whether or how a society or government will consider the broader interoperability ecosystem.”104 In the context of courts, these considerations will range from accounting for bias,105 to the prospect of automated decision-making in criminal risk assessments106 or other areas, to the ability of innovators to design access-to-justice-oriented tools that are compatible in key ways with courts across the country.

B. A Comparative Case Study: Brazil’s Interoperable-Legal-AI Efforts

National legal-AI interoperability in the United States would not have to start from a blank slate. Current discourse on the narrower role of data in the legal-services landscape could expand to encompass interoperable AI by looking beyond our borders. For example, leaders could learn from efforts undertaken in Brazil, led by the Brazilian National Council of Justice, which oversees the largest judicial system in the world.107 In 2020, the Brazilian judicial system had a backlog of seventy-eight million lawsuits and what a report to the National Council of Justice called “substantial challenges in case flow management and a lack of resources to meet this demand,” which required “[d]rastic solutions.”108 A first wave of efforts to leverage AI in the Brazilian courts resulted in “a seemingly uncoordinated algorithmic universe in the judicial system,”109 leaving the country lacking “a clear policy direction for the use of AI in the judicial branch and clear mandated policy principles to ensure that AI is used ethically and safely.”110 Researchers observed that “[c]ourts [were] not communicating with the [National Council of Justice] or other courts regarding the development of their own tools,”111 despite AI having been used by courts for everything “from classifying lawsuits, to preventing servers from completing repetitive tasks, to even providing recommendations for a court ruling.”112

In response to these challenges, an academic group from Columbia University partnered with a Brazilian nonprofit research institute to “design a collaborative governance structure to strategically integrate all AI initiatives in the Brazilian judiciary.”113 The project’s three objectives were (1) to assess the different AI tools already developed in the judiciary to create a model for integration and standardization, (2) to design a collaborative governance structure, and (3) to create a proposal for aligning the management model with international best practices.114 The project’s report—the “Brazil Report”—also called for implementing and supporting open-source software in the courts, facilitating opportunities for AI court experts to communicate, creating incentives for courts to join the interoperable system, and partnering with universities and the private sector in the development of tools.115

To be sure, there are differences between the U.S. and Brazilian court systems, and what works for one country in regulating legal technology will not necessarily work for another.116 Even so, Brazil’s efforts can serve as a helpful reference point in exploring opportunities for interoperable legal AI, as opposed to just interoperable data, in the U.S. court system,117 an analysis not yet undertaken in the legal literature on AI and access to justice. Significantly, the Brazil Report outlined a process for implementing an official uniform electronic system that converts, digitizes, and authenticates documents across the court system.118 The Electronic Judicial Process (PJe) was developed through a partnership between the country’s National Council of Justice and various courts.119 The Brazil Report notes that even courts that preferred their legacy electronic systems agreed to transition to the official uniform system in light of the benefits.120 By 2023, thanks to the deployment of PJe, nearly all criminal, civil, and administrative judicial cases in Brazil were managed digitally, with only about 1.1% still paper-based,121 ultimately allowing courts to leverage AI systems using the digitized data.122

This ecosystem stands in stark contrast to the court system’s fragmented past. A more recent report by Brazil’s National Council of Justice observed that “[p]rior to the establishment of the PJe as the national standard, individual courts developed their own procedural systems,” which “evolved into a complex landscape of derivative systems with local variations.”123 It further noted that “[t]hese inconsistencies led to a situation where the PJe implemented in different courts diverged from the national version, hindering communication and data exchange between them.”124

The Brazil Report offers a useful illustration of how many of the pillars of interoperability translate to the court system and the rise of AI. In basing its National Interoperability Model on the European Union’s Interoperability Framework, Brazil focuses on “technical interoperability, syntax (formatting and processing data), and semantics (network architecture).”125 From a technical standpoint, the Brazil plan envisioned embracing a common “factory for AI models” that allows “courts that do not have in-house technology teams to scale algorithms for their operations,” thereby facilitating “an open platform for AI development.”126 From an organizational standpoint, the Brazil Report also called for a national “Laboratory for Innovation in the Electronic Judicial Process,” which would assemble national datasets to train tools, centralize AI expertise, and facilitate the sharing of information, including AI models and algorithms.127 It also recognized the need for a centralized organization “to guide and manage the integration,” including by “creating a roadmap to integrate the AI tools, obtaining commitment from multiple organizations to integrate the AI tools, regular monitoring and evaluation of the integration, providing technical support for the integration, and frequent communication with the organizations.”128

Brazil’s efforts demonstrate how interoperability best practices can apply to expansive court systems to facilitate AI and related digitization efforts, establish necessary governance structures, and ultimately improve court processes, transparency, and outcomes. The interoperability envisioned in the Brazil Report is also reflected in broader National Council of Justice initiatives like its “Justice 4.0 Program,” which “serves as a catalyst for digital transformation within the Brazilian Judiciary” and “aims to guarantee more agile and effective services, ultimately simplifying access to justice for all.”129 The program includes, for example, the Digital Platform of the Judiciary, “a public policy that unifies the management of electronic judicial proceedings across all courts in Brazil, ensuring compatibility between different procedural systems,” as well as the Judiciary’s Single Service Portal, which strives to “allow[] users to access services from any court nationwide within a seamless environment.”130

But Brazil is far from the only international source that could help guide interoperability in the courts. The United States could also look to the European Interoperability Framework, on which Brazil’s model was based, which outlines guidance for interoperable digital public services for European Union member countries.131 The Framework’s Implementation Strategy reflects the complexity of achieving interoperability across multiple jurisdictions, including by describing the importance of identifying the processes by which information crosses borders, as well as developing guidelines for how to better align and simplify those processes.132 The Framework also underscores the need to “[e]ngage stakeholders and raise awareness on interoperability” and “[d]esign and perform communication campaigns promoting the importance of interoperability and benefits from applying the [Framework],”133 an important part of securing the buy-in that Brazil achieved despite initial hesitance from some courts.134 The United States could expect similar hesitance, as the NCSC has noted with its voluntary NODS initiative that U.S. state courts “may decide not to comply for many different reasons,” including that compliance “would disrupt or replace an existing build data process.”135 The European Interoperability Framework further emphasizes that better coordination through interoperability can help “guide the design and development of public services based on users’ needs,”136 which could be especially beneficial for courts when it comes to assessing and meeting the access-to-justice needs of members of the public.

III. mutual benefits for interoperable legal ai and broader ai-governance efforts

Interoperable legal AI would support and be supported by broader emerging efforts in AI governance. For example, courts can look to President Biden’s 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which identifies principles for the executive branch to follow in its implementation of AI.137 The Order underscores the importance of “robust, reliable, repeatable, and standardized evaluations of AI systems,” as well as ensuring that AI policies are consistent with the administration’s “dedication to advancing equity and civil rights.”138 National interoperable legal AI would also help further the Order’s vision for the United States to “lead the way to global societal, economic, and technological progress, as [it] has in previous eras of disruptive innovation and change,” which requires “pioneering [AI] systems and safeguards needed to deploy technology responsibly—and building and promoting those safeguards with the rest of the world.”139 Such international dialogues have the power to “unlock AI’s potential for good, and promote common approaches to shared challenges.”140 National coordination of interoperable legal AI would be much better suited to facilitate such a dialogue than the current status quo of wide jurisdictional variation and siloing. With regard to AI in the criminal context specifically, a more national approach to legal AI would also help further the Order’s commitment to “[e]nsur[ing] fairness throughout the criminal justice system by developing best practices on the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis.”141

In addition, interoperability would ensure that courts are able to follow emerging ethical AI principles. As outlined in the Brazil Report, such principles include respect for fundamental rights, equal treatment, data security, transparency, and AI under user control,142 also sometimes referred to as “human in the loop,” where “an individual . . . is involved in a single, particular decision made in conjunction with an algorithm”143 and “has the ability to intervene” when needed.144

Moreover, interoperable legal AI would complement a larger shift toward national coordination of legal technology to encourage more standardization of relevant rules, regulations, and design principles. In particular, the prospect of interoperable legal AI highlights the potential for national coordination to overcome the challenges faced by local efforts in light of economic and expertise constraints, as well as empirical challenges stemming from a lack of helpful data at the local level.145

From an economic perspective, it is no secret that courts have faced budgetary challenges for decades,146 and it is not surprising that investments in technology are often nonstarters.147 But investing in new technology would be much less daunting if a jurisdiction could adopt a preexisting national AI framework rather than starting from scratch. Similarly, as issues arise in the design, implementation, and execution of interoperability, centralization of interdisciplinary expertise to guide and respond to inevitable challenges would be a tremendous asset. If jurisdictions continue to “go it alone,” both the volume and the variety of issues will be far greater, stretching experts thin. Moreover, evaluation of data is critical to AI development. With interoperability, more courts could do the same thing with the same technology, improving both the quantity and quality of data collection and evaluation. And when guidance is developed, it will be more widely applicable.

Conclusion

This Essay calls for interoperability to play a more prominent role in efforts to leverage AI to help close the justice gap. It further argues that the courts must be the drivers of such efforts. While it is beyond the scope of this Essay to present a comprehensive roadmap for such an undertaking in the United States, it is worth noting that interoperability can start small. In addition to the NODS initiative, which is more narrowly focused on data standards, some jurisdictions implementing or exploring regulatory sandboxes for technology-driven legal services have contemplated using compatible data-collection methods to facilitate data sharing.148 Similarly, partnerships between early innovators in court AI could serve as a model for jurisdictions that might then be more inclined to join forces. Eventually, a national entity could more realistically standardize such efforts on a larger scale, similar to the centralization envisioned in Brazil’s plan. Of course, other issues will warrant attention along the way, including the intersection of interoperability and intellectual property. Above all, this Essay aims to underscore the urgency of elevating AI interoperability within discussions on court data, legal technology, regulatory reform, and access to justice, with the goal of developing a more integrated and unified approach to building a fairer legal system.

Assistant Professor and Clute-Holleran Scholar in Corporate Law, Gonzaga University School of Law. The author thanks Marlee Carpenter for her valuable research assistance.