The Age of Consent
On three October afternoons in the fall of 1974, Grant Gilmore, a Sterling Professor of Law at Yale, delivered his Storrs Lectures, the lecture series at Yale Law School whose speakers had included Roscoe Pound, Lon Fuller, and Benjamin Cardozo. Gilmore was a magisterial scholar: the author of a prize-winning treatise, Security Interests in Personal Property, and what remains the leading treatise on admiralty law; he was the Chief Reporter and draftsman for Article 9 of the Uniform Commercial Code; and his Ph.D. dissertation on the French poet and critic Stéphane Mallarmé had led to an appointment at Yale College before he moved on to study law.
Professor Gilmore’s Storrs Lectures were titled “The Ages of American Law.” Following Karl Llewellyn, Gilmore divided American legal history into three distinct eras. The Age of Discovery roughly spanned the years from the mid-eighteenth century through the Civil War, during which the United States grandly constructed a new legal edifice upon the foundations of the English common law. The Age of Faith lasted from the Civil War through World War I, and was notable for the Olympian status it accorded law and its demigods, including Christopher Columbus Langdell and Oliver Wendell Holmes, Jr. Langdell believed that law was a science from which scientific truths could be derived, and even the skeptical Holmes, according to Gilmore, refined and judicialized Langdellianism.
After that came the Age of Anxiety, Gilmore’s own era, an Age when legal realism gnawed through the core assumptions of the Age of Faith and the nation groped unsuccessfully for new creeds to replace them. Gilmore’s lectures satisfied and mesmerized their audience, and they were soon fashioned into a book, also titled The Ages of American Law, which became a foundational text for introducing law students to American legal studies.
In attendance at the 1974 Storrs Lectures was Philip Bobbitt, at the time Professor Gilmore’s student and advisee, now a professor at Columbia Law School. As an Article Editor for the Yale Law Journal, Bobbitt wrote an Introduction to the first of Gilmore’s lectures when the Journal published it later that year. In the Feature that follows, which will appear in adapted form as the final chapter in a new edition of The Ages of American Law published by the Yale University Press, Professor Bobbitt refracts American legal history since the 1974 Storrs Lectures through the lens of Gilmore’s lectures.
I.
So what happened next? Did the society of which Gilmore wrote in the 1970s become more—or less—just, an assessment Gilmore claimed we could make by examining its laws?1 There are encouraging signs that it did become more just, such as the broadening of access to health care by federal statute,2 and the Supreme Court’s declaration that the Defense of Marriage Act, which blatantly marginalized homosexual unions, was unconstitutional.3 Or was there less justice, as a profusion of laws and regulations, like those of the federal tax code, were maniacally propagated, creating a jungle within which only the best-financed corporate predators could thrive?
I suppose the answer must be, as is so often the case with America, that all of these contradictory characterizations are true. We contain multitudes; we contradict ourselves. Law does reflect the moral worth of a society, and thus it is, at any time, a mass of conflicting moral claims and entitlements. But Gilmore overstated matters, as he knew, when he asserted that the law “in no sense determines the moral worth of a society.”4 Because law guides and channels our moral intuitions—determining at what moments our consciences are engaged to resolve which questions—such assessments are necessarily dynamic and subject to constant change. It is this interaction between the static, studio portraits of a society as reflected in its laws, and the cinematic unribboning of law as it challenges, evolves, and shapes the very consciences that observe its development and on which it depends, that makes the moral evaluations of American society so complex, so elusive, so legal in character.
Gilmore’s conclusion was a paraphrase of Holmes, and it was to a biography of the great jurist and American superhero that Gilmore devoted his last years. The Harvard historian Mark De Wolfe Howe had begun the project, authorized by the Holmes Trust, but he had died having finished only the first forty years of Holmes’s long life—before, that is, Holmes went on the Massachusetts bench and long before he was appointed to the U.S. Supreme Court at sixty-one.5 Gilmore was not an unusual choice to succeed Howe. Though Holmes was known to the public as a great constitutional dissenter, his theories of contract6 had brought him early fame. Moreover, Gilmore was a thorough New Englander and a prominent second-generation Legal Realist; perhaps the trustees thought his reticent and fastidious irony would render Holmes as compelling to future generations as he had been to the early Realists. Gilmore shared with Holmes a rigorous skepticism about reform movements, partisan programs, and political ideologies, indeed of systems of any kind. What he lacked was Holmes’s willingness to let the chips fall where they may, and it was this failure of detachment, a quality so essential for a Nietzschean figure of Holmes’s martial temperament, that led to a paralyzing estrangement between the biographer and his subject. Gilmore died fifteen years after receiving the commission and submitted no manuscript.7
II.
Gilmore’s rueful writer’s block reflected the conundrum into which Holmes and the Realists had led American law. Legal Realism posed this challenge: If law was simply what the judges did, then how could they ever be—from a legal point of view—wrong? And if law was simply whatever the judges did—and they often contradicted and reversed each other and themselves—how could they ever be right? This unavoidably cast some doubt on the legitimacy of the judicial process.
This doubt particularly plagued constitutional law. It was one thing to say that great commercial and financial interests had influenced the drafting of the Uniform Commercial Code—that would hardly be surprising—or that the plaintiffs’ bar had marshaled its political resources to effect ever broader statutory catchments for liability; that, too, was to be expected. But when the legitimacy of constitutional law was called into question, explosive charges were inserted beneath the very foundation of the rule of law: the idea that the state was constrained by law. Most acutely, the American practice of judicial review was called into question, for if there was no reason to believe that the judges had a legal basis for their decisions, then why should we not defer to the Congress and the state legislatures or the Executive, who could at least claim the political endorsement of the electorate?8 If judges could never be wrong, then law itself was indeterminate—there was a correct argument for any conclusion—and the only explanation for the different results that judges reached had to lie outside the law in politics, ideology, personality, bias, and countless other factors, none of which provided, and many of which forfeited, the legitimacy of legal decisionmaking.
Gilmore’s contemporaries working in constitutional law struggled, often heroically, with this problem. At the Harvard and Columbia Law Schools, Henry Hart and Herbert Wechsler proposed an answer. It wasn’t what the judges decided but how they arrived at and applied their decisions that mattered. Judicial rule-applying must be a reasoned process of deriving rules from general principles of law—regardless of the substantive content of those principles—and following those rules resolutely in resolving actual controversies between adverse parties without regard to their status or to any fact not explicitly made relevant by the rule itself.9
On the U.S. Supreme Court, Justice Hugo Black proposed a different answer: not the legal process, as Hart and Wechsler’s approach came to be known, but the plain words of the constitutional text provided the bases for judicial decisions.10 The Constitution’s majestic absolutes—“Congress shall make no law . . . abridging the freedom of speech or of the press”; “Nor shall any state deprive any person of life, liberty or property without due process of law”—supervened and cordoned off vast areas of judicial decisionmaking where politics and personality were forbidden to trespass. These provisions were to be applied according to the common understanding of the words to our contemporary publics, and not reconceived by doctrine or recondite, legalistic constructions. “No” meant “No.”
At the Yale Law School, Charles Black—Gilmore’s colleague and friend, the best man at his wedding—proposed yet another route out of the wilderness. Courts, Professor Black wrote, should look to the political structures ordained by the Constitution. American constitutional law could not be confined to constructions based on the history and text of the Constitution alone because many of its most important commitments lay in the relationships implicit among these structures. The democratic process, which authorized judicial oversight, and not the legal process isolated in an apolitical vacuum, legitimated legal rulemaking, for example. This could be inferred from the relationship between Article I and Article III of the Constitution whereby Congress established the federal court system, endowed it with jurisdiction, and expected it to apply the statutes the Congress had passed, subject only to the constitutional restraints to which the Congress itself was subject.11
Gilmore himself was intrigued by an approach proffered by the eccentric but hugely forceful Chicago Law School professor, William Crosskey, who gave a new, post-Realist twist to the originalist position—the position that constitutional interpretation is a matter of recovering the original intentions of the ratifiers of the text to be construed. Courts, Crosskey argued, should determine such intentions by examining the language of the society from which those ratifiers came.12 Teasing out meaning from history had often been criticized by Realists as leading to labyrinths of indeterminacy, but Crosskey claimed we could avoid such mazes by taking words and phrases on their own historical terms and building up meaning to arrive at original intentions rather than the other way around,13 as originalists had customarily done.
Alexander Bickel, a colleague of Gilmore’s and Black’s at Yale, pressed yet another alternative. Extending an approach with origins in the jurisprudence of Justices Louis Brandeis and Felix Frankfurter, Bickel argued that the practical consequences for the institutions of the law should guide judges in deciding how (or even whether) to apply the provisions and precedents of the Constitution.14 As with the other second-generation Realist approaches, Bickel’s sought a calculus long ratified by common law—in his case, a comparison of the costs and benefits of a proposed rule—and tried to connect it to a fixed position mandated by the Constitution, the institutional position of the judiciary, thereby limiting the discretion of judges and protecting their stature.
Finally, an outsider—if a philosophy professor educated at Princeton and teaching at Harvard can be deemed so—claimed that legitimacy for the rules of government could be established by applying a simple test. What rule, John Rawls asked, would we all agree to in the absence of any knowledge about its impact on ourselves?15 Such a rule derives from the guiding ethos of any society whose laws are indifferent to the political, social, and economic interests of those who wield power—even the power of a majority of the electorate. Law professors—most influentially Ronald Dworkin—as well as judges and advocates, some of whom had not read the philosopher or perhaps did not even know his name, adopted this approach or others derived from it,16 in the hope of finding that moral principle, that saving, generative ethical theory, that would allow them to decline the wormwood chalice proffered by Legal Realism.
Each approach enjoyed a temporary preeminence—even Crosskey’s unusual historicism, which has recently experienced a renaissance17—but ultimately no one approach was wholly able to succeed because none was able to capture the unreflective consensus enjoyed by Formalism in its Age of Faith. New alignments formed, composed of the various approaches that had failed to achieve a stable hegemony: “strict construction”—composed of historical, textual, and structural elements—vied with a congeries of allegedly more latitudinarian forms—doctrinal, prudential, and ethical methods of interpretation—that its opponents disingenuously decried as “judicial activism.” But this simplifying, contrapuntal division made the problem posed by Legal Realism harder, because there was no legal reason to prefer one set of approaches to another beyond the claim that each made that it alone was lawful, on its own terms. Gilmore’s Age of Anxiety had become an age of uncertainty, of ambiguity, of incompleteness. Despite Llewellyn’s hopes for a renewal of the Grand Style of judging, we witnessed instead a new and barbaric Formal Style, as Gilmore bitterly foresaw. Indeed the whole history of American law might have been summed up in Zbigniew Herbert’s short poem “From Mythology”:
First there was a god of night and tempest, a black idol without eyes, before whom they leaped, naked and smeared with blood. Later on, in the times of the republic, there were many gods with wives, children, creaking beds, and harmlessly exploding thunderbolts. At the end only superstitious neurotics carried in their pockets little statues of salt, representing the god of irony. There was no greater god at that time.
Then came the barbarians. They too valued highly the little god of irony. They would crush it under their heels and add it to their dishes.18
The attitude of the vandals was simply put by a constitutional lawyer, Martin Garbus, who wrote in the New York Times that
law is just politics by a different name, and . . . most Supreme Court justices are result-oriented and choose legal theories (originalism, judicial activism and the like) as window dressing while they get where they want to go.
Although these illusory labels can be treated as serious methodologies and may be of interest to law professors . . . the American legal system [is] . . . just another part of the government, neither higher nor lower than the other two branches, and one that must be muscled.19
This is a crude but powerful prudentialism. Unlike the prudentialism of Brandeis, Frankfurter, and Bickel, it is not concerned with protecting and preserving the institutions of governing—they are all the same anyway, on this view. These radical prudentialists—Gilmore disparagingly called them the New Conceptualists20—had forgotten, if they ever knew it, the insight that, “[t]o realize the relative validity of one’s convictions and yet stand for them unflinchingly, is what distinguishes a civilized man . . . .”21 Instead, they hungered for certainty. Some, not finding it after the exposés of Legal Realism, marched into a politicized realm of cynicism, clothed in an idealistic truculence; others, also not finding the certainty of science, decided to invent it. Two movements, Critical Legal Studies and Law and Economics, arose, the illegitimate children of Legal Realism. They did not wish to slay their father so much as inherit his mantle. These movements were neither uncultured nor unsophisticated. On the contrary, their leaders were among the most cultivated and widely read of the legal professoriate, though they could often be uncivil and in their need for a reductionist certainty could appear to be bullying and naïve.
III.
Law and Economics at first appeared to be no more than one more iteration—if the most powerful—of the “Law and ___” phenomenon that arose as a consequence of Legal Realism’s claim that law was a social science if not quite the physical science envisioned by Langdell. Some law professors, like Eugene Rostow and Guido Calabresi, also had advanced training in economics. Soon, anthropologists, sociologists, behavioral psychologists, even psychiatrists, many without law degrees, began to appear on law faculties. Calabresi’s The Costs of Accidents22 quickly became a classic text, and excerpts appeared in the leading torts casebooks,23 but Calabresi was clearly a law professor with formidable economic skills, not a neo-classical economist. As he himself wrote,
The classical economist will show ad nauseam that those who were made better off by moving to a free market choice system based on full costs could more than compensate those who were made worse off. The problem is that such hypothetical compensation rarely comes about. It may be too expensive; it may be made feasible only through the levying of taxes that misallocate resources grievously; or it may be politically impossible to accomplish. In all such cases, the theoretical desirability of the totally free market approach has little significance in practice.24
But the Law and Economics movement was not simply the “Law and Alfred Marshall Show.” For one thing, it was a movement. Financed by corporations and foundations, it sponsored a series of annual workshops for law professors—Arthur Leff ridiculed these as “Pareto-in-the-Pines”25—to educate them in the new skills of microeconomics and indoctrinate them in the manner of political summer camps. At its core, Law and Economics relied on two controversial assumptions about the world.
First, the efficient markets hypothesis holds that asset prices fully reflect all relevant information and thus provide accurate signals for the allocation of resources.26 This idea is an inference from the second assumption, the rational expectations hypothesis, which states a postulate about the conduct of individual economic actors.27 It holds that these actors form and update their judgments in response to available information in an optimal way (rather than, say, supposing that the future will resemble the past, as in adaptive expectations). It is consistent with rational expectations that outcomes depart unpredictably from expectations, as even a person acting rationally need not have perfect or complete information. But it is not consistent with the postulate for outcomes to depart predictably from expectations, for such a departure would mean the expectations were not rational. Both of these hypotheses depend upon a more general assumption: that people behave in ways that maximize those outcomes they consider most desirable, and that they know how to do this by acting rationally.28
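The distinction between rational and adaptive expectations can be made concrete with a minimal numerical sketch. The toy price process and all figures below are hypothetical, invented purely for illustration rather than drawn from the sources cited here; the point is only that a rational forecaster’s errors are uncorrelated with the information available at the time of the forecast, while an adaptive forecaster’s errors are predictable from that same information.

```python
import numpy as np

# Hypothetical toy model: tomorrow's price equals today's observable
# fundamentals plus a genuinely unforecastable shock.
rng = np.random.default_rng(0)
T = 100_000
fundamentals = rng.normal(size=T)   # information available when forecasting
shocks = rng.normal(size=T)         # unpredictable component
price = fundamentals + shocks       # realized outcome

rational = fundamentals                          # uses all available information
adaptive = np.concatenate(([0.0], price[:-1]))   # assumes tomorrow resembles today

err_rational = price - rational     # equals the shock alone
err_adaptive = price - adaptive

# Rational errors are uncorrelated with the information set (~0.0);
# adaptive errors are predictably correlated with it (~0.5 in this setup).
print(round(np.corrcoef(err_rational, fundamentals)[0, 1], 3))
print(round(np.corrcoef(err_adaptive, fundamentals)[0, 1], 3))
```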
These hypotheses promised to serve as the basis for a radical reductionism when applied to law. Legal rules—such as those that govern liability for breach of contract or the commission of a tort, rules that determine property rights or responsibility for crimes and the sanctions we enforce against criminals—could be evaluated and so adjusted that the persons subject to those rules would produce, as efficiently as possible, the outcomes desired by society. All that was lacking was a principle, which would overcome the political objections to exalting the efficiency of public policy over other values by explaining that the distributive consequences of such an approach were negligible, and a clever rhetorician who would show not only how this was done but, in the spirit of Holmes’s The Common Law, that it had always been the implicit logic of common law judges, even if they were unaware of it at the time. Thus the cul-de-sac into which Legal Realism had led American law could be deftly redesigned as a happy, if confined, roundabout. If law was nothing more than what judges did, it “turned out” (a favorite, cloying phrase of social scientists) that what they did was microeconomics.
In 1906, the Italian economist Vilfredo Pareto provided one half of the needed principle when he proposed to a society deeply riven by partisan and social conflict this modest intersection of interests: surely, he said, all would agree with a policy that made at least some people better off and made no one worse off?29 The other half was given by the Chicago School economist Ronald Coase, whose famous “theorem” proved that when transaction costs were not a factor—when, among other things, information was equally and cheaply available to all market agents—liability rules did not influence the efficiency of the ultimate allocation of resources.30 Whether the legal rule made the rancher liable when his cattle trampled the crops of his neighbor or left the farmer to suffer without redress, the outcome was the same from society’s point of view: the two parties would bargain, arriving at the most efficient outcome—a fence, for example, whose cost was measured against the cost of the ruined crops and the profits of uninhibited grazing. The party who paid might be different, but the total cost was the same whether fruitlessly suffered or fully compensated, and neither party was worse off than he would otherwise have been.31 Taking these insights of Pareto and Coase together yielded this conclusion: a perfectly competitive market would result in distributions of wealth from which no one could be made better off without someone being made worse off. Any redistributive action—indeed any action at all to shift losses—was bound to make the market less efficient, regardless of its claims of justice.
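The rancher-and-farmer illustration reduces to a short worked sketch. The figures below are invented for illustration (they appear nowhere in Coase or in this Feature); under the theorem’s zero-transaction-cost assumption, either liability rule yields the same allocation—the fence is built whenever it costs less than the damage it prevents—and only the identity of the payer changes.

```python
# Invented figures for the rancher/farmer example; assumes costless
# bargaining (zero transaction costs), as the Coase theorem requires.
CROP_DAMAGE = 100   # annual crop loss if the cattle roam freely
FENCE_COST = 60     # annual cost of a fence that prevents that loss

def efficient_choice() -> str:
    # Society's calculus is identical under either liability rule:
    # build the fence only if it is cheaper than the harm it averts.
    return "build fence" if FENCE_COST < CROP_DAMAGE else "accept damage"

def payer(rancher_liable: bool) -> str:
    # Only the distribution changes: the party the rule burdens pays
    # (directly, or by bargaining to fund the cheaper alternative).
    return "rancher" if rancher_liable else "farmer"

for rule in (True, False):
    print(f"rancher liable={rule}: {efficient_choice()}, paid by {payer(rule)}")
# Both rules print "build fence"; only the payer differs.
```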
Thus in the aftermath of the bitterly ideological conflicts of the twentieth century, an apparently objective method had been arrived at that eerily recapitulated Holmes’s prescription:
[W]hen we are dealing with that part of the law which aims more directly than any other at establishing standards of conduct, we should expect there more than elsewhere to find that the tests of liability are external, and independent of the degree of evil in the particular person’s motives or intentions. . . . They assume that every man is as able as every other to behave as they command. If they fall on any one class harder than on another, it is on the weakest. For it is precisely to those who are most likely to err by temperament, ignorance, or folly, that the threats of the law are the most dangerous.32
This “objective” method must have seemed right to many people in the late twentieth century after the terrible wars to determine whether communism, fascism, or parliamentarianism would be the legitimate constitutional order of the industrial nation-state,33 conflicts that by some estimates had cost ninety million lives,34 just as Holmes’s formulation must have seemed correct after the Civil War—in which both sides appealed to God—had cost the lives of more than 750,000 American soldiers alone.35 After such suffering, the desire for a consensus independent of ideology became itself an intense, ideological objective. Now the Law and Economics movement lacked only a virtuoso, and it found him in Richard Posner.
Richard Posner entered Yale at sixteen and left four years later with an English degree, summa cum laude. He had a dazzling career at the Harvard Law School—first in his class, president of the Harvard Law Review—before clerking for Justice William Brennan, at the time the most liberal member of the Supreme Court, and then working as an assistant to Solicitor General Thurgood Marshall.36 A red-diaper baby from Manhattan, he might have been expected to join the lists of leftist law professors at the great American schools.
Something happened, perhaps not so different in kind from the street violence that radicalized the German jurist and fascist Carl Schmitt in the 1920s. The turmoil on American campuses, from which some schools like Columbia and Berkeley have never quite recovered, seems to have led Posner to question the liberal—and liberal/legal—notion of reasoned consensus.
“Politics is about enmity,” he once said in words that could have been written by Schmitt. “It’s about getting together with your friends and knocking off your enemies. The basic fallacy of liberalism is the idea that if we can get together with reasonable people we can agree on everything. But you can’t agree: strife is ineradicable, a fundamental part of nature, in storms and in human relations.”37
But that didn’t mean law was politics. Indeed it pushed Posner the other way, in a search for a point beyond enmity and sectarianism. This he found in Pareto and Coase. After a year at Stanford, Posner moved to the Chicago Law School, where he grew to embody its famous empiricist model to such a degree that now the school resembles him. His most influential work, Economic Analysis of Law, is now in its ninth edition.38 Most prominently in the first edition, however, it is composed of a series of marvelous sleights of hand, reminiscent of the mathematical transformations by which identities are proved in trigonometry, in which each branch of the law is resolved into a species of microeconomics. These transformations resemble the just-so stories of sociobiology and neuroscience, and other reductive centrifugal methods by which all the elements not germane to the particular qualitative sediment sought are spun away, an art of which Posner is a master and of whose exaggerations and distortions he is quite aware. Indeed partly by overstating his case he became, as of 2000, the most-cited legal academic in the United States.39
Posner has many gifts, including a lucid pen and a refreshing hostility to cant, and these two are allied with perhaps his most controversial trait, a Nietzschean detachment that doesn’t “make allowances,”40 a quality of anti-sentimentality he shares with Holmes.41
Gilmore had identified, early on, the Holmesian legacy in the Law and Economics movement: “Holmes’ strict definition of boundaries of liability, stress on the introduction of scientific and economic considerations to legal questions, and lack of social welfare consciousness have induced economists, and lawyer-economists, at the University of Chicago to claim Holmes as one of their own.”42
But Gilmore also saw something more. When he gave his Storrs Lectures, his audience was shocked at the portrait of Holmes as the Mephistopheles to Langdell’s befuddled Faust. Wasn’t Langdell’s attempt to found a “science” of law just the sort of naïve law-quarrying that Holmes and the Legal Realists had ridiculed? Wasn’t it Langdell’s illusions that Holmes, Brandeis, and Cardozo, as well as Corbin and Llewellyn, had sought to dispel? Yes, but not only that. Just as Voltaire and the philosophes had accepted the basic tenets of the ideology they professed to despise—and just as the Roman Catholic Church had deftly moved to assume the scope and power of the Roman Empire it superseded43—Gilmore saw a “community of interest” between the Realists and the Langdellians.44
Indeed, he went further. Though few appreciated it at the time, Gilmore not only saw Posner as Holmes’s heir, but quite shockingly saw the Law and Economics movement as a repackaging of Langdellianism. On almost the last page of The Ages of American Law,45 he quoted Posner’s inaugural announcement of the Journal of Legal Studies, the house organ of the law and economics movement, with its uncanny repetition of Langdell’s own goals, and even his metaphors:
The aim of the Journal is to encourage the application of scientific methods to the study of the legal system. As biology is to living organisms, astronomy to the stars, or economics to the price system, so should legal studies be to the legal system: an endeavor to make precise, objective, and systematic observations of how the legal system operates in fact and to discover and explain the recurrent patterns in the observations—the “laws” of the system.46
Llewellyn had hoped that the emergence of the Formal Style before World War I was simply an aberration and that the Grand Style would reemerge triumphant.47 Like Llewellyn, Gilmore saw in Corbin’s pragmatic treatment of contracts48 and Cardozo’s seductive case-lawyering49 evidence that a more pluralistic, less formalistic style was emerging once again in American law.50 Indeed Llewellyn’s and Gilmore’s efforts with the Uniform Commercial Code’s open drafting style and its vague rules followed by extensive exemplary notes seemed to confirm this trend. Constitutional theorists like Bruce Ackerman claimed to find in the New Deal reversals of Formal Style opinions a “constitutional moment” of such consequence that it paralleled the adoption of the Civil War amendments that announced the Age of Faith and the founding cases of the Republic that marked the Age of Discovery.51
Alas, reports of the death of Formalism were exaggerated, as the Law and Economics movement demonstrated. Moreover, a simple indifference to craft, notoriously in Roe v. Wade52 but no less in evidence in the jurisprudence of less controversial cases—whether striking at executive authority as in United States v. Nixon53 or Clinton v. Jones,54 or legislative discretion as in Reynolds v. Sims,55 or affirming congressional power as in Garcia v. San Antonio Metropolitan Transit Authority,56 whether enforcing rights as in New York Times Co. v. Sullivan,57 or trampling on them as in Bush v. Gore58—such indifference is not sufficient to merit the accolades of a “Grand Style” even if it is heedless of the rigors of a Formal Style. Perhaps a lack of style fitted the age. Perhaps it was an age of carefree vandals who smashed up things and then retreated back into their vast carelessness and let others clean up the mess they had made. That suggestion leads us to the other, post-Gilmore movement that, like Law and Economics, sought to build on the wreckage left by Legal Realism.
IV.
The movement that came to be known as Critical Legal Studies (CLS) was obviously not going to be impressed with argumentative rigor by judges whom it referred to as “toadying jurists.”59 Far from seeking a way out of Legal Realism, CLS embraced its critique of legal reasoning with a passionate intensity. The UCC that Llewellyn and Gilmore had crafted was too pluralistic, too craft-oriented. If the Law and Economics movement tried to restore an objective, universal calculus out of fear of the unknown, then CLS exploited this fear almost to sadistic depths, claiming that the lack of such a calculus meant that all was potentially permitted; what actually eventuated was the replication of oppressive hierarchies.
Two principles united the CLS movement: (1) traditional legal doctrines were incoherent, precisely because they were pluralistic; for every rule there was an opposite, equally plausible formulation and thus the system of rules was infinitely manipulable, indeterminate, and subjective; and (2) the system existed in this mystifying form in order to sustain a legal order that was the basis for corporate capitalism, distracting reformers and dictating who gets how much in society while legitimating an oppressive social order. Thus American law, which claimed to be to some degree autonomous from politics, was really only an extension of politics by other means.60
Although the CLS movement claimed continuity with the civil rights movement, this genealogy did not quite wash. The historic triumphs of the civil rights struggle were the laws it spawned, the Civil Rights Act of 196461 and the Voting Rights Act of 1965,62 and the numerous and courageous decisions of the Fifth Circuit judges who fearlessly interpreted Supreme Court precedents to destroy de jure segregation.63 The true paternity of CLS can be found in the anti-war protests, where demonstrators had circumvented the ordinary processes of representation and elections, shouting down speakers, closing classrooms, and attempting to make the society ungovernable. That movement had not so much ended the war as forced the United States to abandon it; it was a heady experience that quite a few protestors were loath to leave behind. It created a generation infused with the confidence that the society looked to them for change, and that they, rather than the elected and appointed leaders ostensibly in charge, knew how to deliver that change. In the universities, perhaps especially in the law schools, they looked at their older colleagues—men who had supported the war, often in melancholy resignation—and did not want to be like them.
According to its principal theorist, Roberto Unger, Critical Legal Studies was composed of three principal perspectives: a claim of radical indeterminacy64 that fed a deconstructionist critique, exposing the role of the status quo embedded in the assumptions of the American legal process; a functionalist, neo-Marxist position that appealed to the conventional left; and what Unger called a “micro-institutionalist” program that asserted that the alternatives to American practices had to be recovered from a canvass of the “institutional variations in present and past law” because such alternatives had, at the level of traditional ideological abstraction (e.g., socialism versus capitalism), evaporated.65 This may have been news to Unger’s companions at the outset of the movement in the 1970s,66 but by the time of Unger’s own Storrs Lectures in 199467 the appeal of Karl Marx had considerably waned.
In the meantime, from roughly 1977 and the founding CLS Conference at Ann Arbor to the disillusionment with which it is today generally regarded,68 CLS ridiculed, insulted, and assaulted the liberal establishment that had overwhelmingly dominated the elite law schools. Duncan Kennedy, the charismatic face of CLS for most Harvard Law students, had grown up in Cambridge and had known Mark Howe, Louis Jaffe, and Ben Kaplan—all senior members of the Harvard Law School faculty who were widely revered and stood, like Gilmore, for a particular kind of post-Legal Realism that was “skeptical of any attempt at grand theory of either a descriptive or a normative kind.”69 They reminded Kennedy of the pre-Civil War Northern Democrats of whom Henry James wrote,
Such was the bewildered sensation of that earlier and simpler generation . . . that . . . their illusions were rudely dispelled, and they saw the best of all possible republics given over to fratricidal carnage. This affair had no place in their scheme, and nothing was left for them but to hang their heads and close their eyes.70
The Crits had captured something altogether true about the dominant post-Realist Law School: its members struggled to justify themselves when confronted by the very heirs whose patrimony they had attempted to preserve. Gilmore’s generation had tried to rebuild a bulwark against Legal Realism; CLS wanted to make sure that didn’t happen.
A paramount issue was the question: what should the sequel to nineteenth-century legal science (“doctrinal formalism”) be? The point was a contest over the method of reasoned elaboration: the purposive interpretation of law in the vocabulary of impersonal policy and principle. The mainstream schools of legal theory—philosophies of right and justice, law and economics, legal process—tried to ground this analytic practice at a moment when its assumptions were already ceasing to be credible. The point was to argue for another future for legal analysis.71
Some of its adherents—but by no means all—credit CLS with ensuring that the legitimacy of conventional American legal practices has never been reestablished. I would be inclined to attribute this to Legal Realism, but if the advocates of CLS simply mean that they renewed the insights of Legal Realism against those in Gilmore’s era who tried to fashion a post-Realist jurisprudence, perhaps they are right in their claims. CLS was always redefining itself to avoid its critics rather than answering them; for example, to the charge that the movement had collapsed by the mid-1990s, Unger rejoined, “Those of us who called it a movement did not intend to establish a permanent genre or school of thought but rather to intervene in a particular moment, in a particular direction.”72
Were there few constructive ideas? CLS was therapeutic, not constructive. The very suggestion that they should have a replacement for the conventional practices was a contemptible affront, an insidious effort to co-opt them into reforming an irremediable enterprise.73 Were the leaders a bit too elitist, too upper-middle class, too interested in good restaurants, for the masses whose interests they claimed to champion? They weren’t Leninists, for heaven’s sake; rather, a new organized “left bourgeois intelligentsia” that would one day merge with an unspecified mass movement to initiate “the radical transformation of American society.”74
But to note these aspects of the movement misses its appeal to my generation. In the first place, CLS’s leaders had considerable gifts at doing the doctrinal analysis that their predecessors thought so essential. As Daniel Markovits later observed,
[E]ven as its practitioners deny that doctrine can decide cases, they retain a formalist’s aesthetic love of doctrine (something lawyer-economists almost at once abandoned). If one looks at Unger’s “The Critical Legal Studies Movement,” [and one might add, Duncan Kennedy’s “Form and Substance in Private Law Adjudication,”] one finds page after page of genuinely first-rate private law doctrinalism, just aimed [in] a direction almost exactly opposed to the one that traditional doctrinalists pursue.75
Moreover, its leaders had for many of my contemporaries a charm and rebellious attractiveness. Kennedy himself was an irresistible Pied Piper for some students (though an equally irresistible target for their professors). This was the advice he gave to students who regrettably went to large law firms: “[Resistance] means engaging in indirect struggle to control the political tone of the office, say by refusing to laugh at jokes. Blank expressions where the oppressor expects a compliant smile can be the beginning of actual power.”76 It is not hard to see why some at the time likened such advice—as opposed to trying to persuade, by example, young lawyers to abandon their customary milieu in favor of living with the poor—to the prep school student, home on break, who tries to shock his parents’ friends at dinner.
While CLS built a large body of scholarly work that was heavily freighted with inherited jargon—“fundamental contradiction,” “false consciousness,” “counter-hegemonic consciousness,” “ideological state apparatuses,” even “deviationist doctrine”—it needed the adroit elusiveness of its own Jack Flash, a rhetorician with considerable terpsichorean skills. To float like a butterfly escaped from the chrysalis of the leaden law school, to sting like a bee—see Mark Tushnet’s acid attack on Laurence Tribe77—CLS had to have the dance step of a Duncan Kennedy. Although convinced that the existing legal and social arrangements should be free of the hegemonies and hierarchies that currently prevailed—Kennedy proposed that salaries for janitors and law professors be equalized,78 that students be admitted to the most prestigious law schools by lottery79—CLS exponents’ influence derived in great measure from their seizure of the commanding heights of tenured professorships at the Harvard Law School, whose position at the apex of legal education they gleefully exploited. Blocking appointments on political grounds,80 vituperatively attacking colleagues in print, persuading the law reviews to accept submissions they merrily called “trashing,”81 CLS seemed, for a time, where the future of law, or at least the study and analysis of law, lay.82
But CLS, while it offered a generation hope that a change of consciousness would open up as-yet-undetermined ways of avoiding the delicate balancing of values that the preceding generation had cultivated within the walled garden of its privileges and its power, was never able to deliver on its promise. How was consciousness changed by lawyers and judges if not by law? What did a change in consciousness amount to if its vision was not secured by laws? The fatal blows to the movement were delivered by other, more authentic movements—feminist and race theorists who had no trouble finding an Archimedean point on which to base their preferences and lever the society. CLS’s principal theorist, Roberto Unger, seeking just such a fixed point in the widening gyre, had ended his most famous work with the plaintive, “Speak, God,”83 but the feminists and race activists did not need any divine confirmation. Indeed, what they sought was confirmation of their victories in the courts and legislatures—that is, they sought the very imprimatur that CLS was busily trying to discredit.
A young professor at the Yale Law School, Arthur Leff, answered Unger with a witty reply in the form of a memorandum from “The Devil.”84 Leff, a commercial law scholar, had been plucked from obscurity by Gilmore and brought to New Haven. He saw clearly that neither Law and Economics nor CLS could validate itself in the post-Realist environment without privileging its own normative assumptions (whatever the merits, and these were disputed, of their descriptive projects). Without some external referent—without God’s guidance—all our normative systems and intuitions were contestable, and if the contest was to be waged by legal argument, then the indeterminacy at its core that the Legal Realists had identified made the entire enterprise a bad joke. Of course the Crits saw this; that is why they claimed it didn’t really matter that they had nothing constructive to replace the system they trashed. The difficulty they encountered was that while they were reassuring themselves on this point, the Law and Economics movement was putting judges on the bench, writing deregulation into statutes, and resolutely replacing the liberal state’s hostility to the unregulated market with deference to markets untrammeled and undisturbed by law.85 Things were changing all right, but not in the direction CLS had anticipated.
Leff skewered the Law and Economics movement for its counterintuitive pyrotechnics. Wherever Posner found an inefficiency and mocked the ineptitude of a rule, Leff simply asserted a different value being maximized.86 How could Posner say he was wrong if the ultimate test was what society actually does? Even if it was accepted that the common law could be explained as the result of unconscious, perhaps genetically driven impulses to efficiency—a bizarre marriage of Richard Dawkins and Ayn Rand, a union one would not want to visualize—this did not provide the basis to find an ultimate warrant for efficiency as the touchstone for justice (though Posner once claimed that justice simply was efficiency87). As CLS had shown in the discrediting of the liberal state, a mere practice could not provide justification for itself. The problem with the Law and Economics movement was that it wasn’t conservative enough. It had nothing to say about the values of decency, modesty of ambition, deference to tradition, reverence for sacrifice, privacy, loyalty, courage, fidelity, or even simple honesty. It might be possible to link these to efficient outcomes—and if anyone could do it, the artful Professor Posner was the person—but there hardly seemed any necessary link and there were many obvious counterexamples. The problem with CLS was that, for all its defiant poses, it wasn’t radical enough. It began as a Marxist movement just when Marxist regimes were being dismantled, wall by wall, barbed wire and all, by the very persons those regimes claimed to serve—persons who, “it turned out,” preferred a liberal state. CLS then attempted to transform itself through dalliances with existentialism, decisionism, structuralism, and eventually post-modernism, chasing the avant-garde and arriving only to find its new partner was already passé.88 Building on the powerful insights of Legal Realism, CLS added little insight of its own. It was foreordained, perhaps by their common descent from Legal Realism, that the movements would merge; and this happened in the person of Richard Posner himself, who became the last Crit, denouncing the pretensions of legal argument to form any structure of meaning beyond the service of power.89
Leff died at the age of forty-six in 1981; an austere eulogy90 was written by Gilmore, who died the next year. Gilmore noted that Leff had devoted himself, in what were to be his last years, to writing a legal dictionary; Gilmore said this was a project “that no one else would have thought of.”91 There are some obvious reasons Leff might have set out to do it. It might have made his family some money. It was an open-ended outlet for his wit and clarity.92 But was it not also a bulwark against his despair? For what the debate after Legal Realism ignored were the words, the legal concepts and doctrines we employed, deployed, criticized, rejected, refashioned, that had a legitimacy all their own. This wasn’t justification—perhaps we still needed God for that—but it would allow us to go on. It didn’t require that we throw away the ladder by which we had emerged from feudalism.
“Law and ___” had implicitly disparaged such an enterprise, even while it paid it the false and sometimes smirking homage of claiming to “explain” it. Yet as one of the most incisive American literary critics once wrote,
Some critics make a new work of art; some are psychologists; some mystics; some politicians and reformers; a few philosophers and a few literary critics altogether. It is possible to write about art from all these attitudes, but only the last two produce anything properly called criticism; criticism, that is, without a vitiating bias away from the subject in hand. The bastard kinds of criticism can have only a morphological and statistical relation to literature: as the chemistry of ivory to a game of chess.93
To suppose, however, that the Law and Economics and Critical Legal Studies movements would appreciate that the source of their enthusiasms was also the source of their ultimate sterility would be to ascribe to them a depth of self-reflection even greater than the insights they ascribed to themselves.
V.
The hunger for a validating foundation for law made an equally great impact outside the academy in the efforts of Congress, the regulatory agencies, and the judiciary to reduce the discretion exercised by officials. As a prominent Realist judge, Charles Wyzanski, put it: “Choosing among values is much too important a business for judges to do the choosing. That is something the citizens must keep for themselves.”94 If, as the Crits had argued, “who decides is everything, and principle nothing but cosmetic,”95 then reducing the scope for decision by officials, toadying or otherwise, was a vital step in assuring fairness. If, as the Law and Economics movement had claimed, all decisions could be reviewed by the application of a discretionless, even mathematical, analytical rule, then repeated layers of review would eliminate the idiosyncratic and arbitrary, refining decisionmaking to that which most closely hewed to the calculus of efficiency.96 The importunate and acerbic guests from Legal Realism that demanded a foundation for law do not depart if their demands are not accommodated, yet the result of trying to satisfy them is to live with their desires rather than our needs.
And so it has proved. After all, if there was no warrant for the assertion of particular values by judges, this was certainly true for less exalted figures like teachers, or policemen, or doctors. Whereas Gilmore’s generation had tried to rescue common law notions of reasonableness, duty, consent and the like from the corrosive acid of Legal Realism that exposed their biased, unreflective and often contradictory precedents, the next generation struggled to find a technology of decisionmaking that would eliminate or at least minimize these flaws.97
It was already apparent that deep trends were developing in American law that would move its orientation away from the interest of groups, with which it had been concerned since the Civil War—racial and ethnic groups, unions, political parties, sectarian organizations, the underprivileged and the marginalized—to a greater focus on the individual. Initially this was manifested in a “rights revolution” wherein the interests of groups against the state were vindicated through individual lawsuits. But it is now becoming clearer that something more fundamental was at work, something of which CLS and the Law and Economics movement themselves were mere epiphenomena.
No doubt the most controversial of the Supreme Court’s decisions at the time of Gilmore’s lectures was Roe v. Wade,98 which recognized a constitutional right of women to terminate their pregnancies. Here the rhetoric of rights proved problematic, however, as a broad political reaction arose that asserted the rights of the unborn, a group at least as vulnerable and underrepresented as pregnant women. One way to resolve this tension was to shift the spotlight from groups to individual persons. Three years after the lectures, the Court overturned an important rights precedent that had held corporations liable for the disparate racial impact of their hiring policies; henceforth, actual discrimination against the individual plaintiff had to be proved.99 By 1995, the Court was holding unconstitutional the common practice of minority “set-asides”—a means of assuring that a certain percentage of contracts went to vendors from certain recognized racial or ethnic groups.100
Perhaps this trend to empower the individual reached a turning point with the reversal of the right/privilege distinction. In an 1892 case brought by a New Bedford policeman who was fired for the expression of his political views, Holmes had written that the “petitioner may have a constitutional right to talk politics, but he has no constitutional right to be a policeman.”101 While courts had accepted claims based on an abuse of discretion by officials, the presumption lay in favor of the person exercising official responsibility. Now, widespread skepticism of, even hostility to, duly constituted authority replaced the deference of earlier generations as the public was persuaded that government officials had deceived them—especially regarding the Vietnam War—and were in collusion with powerful interests to preserve unsafe automobiles, pervasive pollution, and a rigged political system that favored incumbents and suppressed challengers.
In the midst of the war, five high school students in Des Moines, Iowa, were suspended after they wore black armbands to class in protest. The Supreme Court announced that public schools should not be “enclaves of totalitarianism” and held the suspensions unconstitutional.102 I doubt the Court realized that this decision protecting nonverbal forms of political action would lead, eventually and perhaps unavoidably, to the evisceration of campaign finance laws and the holding in Citizens United v. FEC that the Congress could not regulate private funding for political campaigns because campaign contributions—like other non-verbal demonstrations—were actions protected by the First Amendment’s bar against laws abridging free speech.103 For in a political environment dominated by expensive media campaigns, who can deny that once the limitation of the First Amendment to the spoken or written word is dispensed with, the checkbook of the millionaire speaks at least as formidably as the armband on an adolescent? If we weren’t willing to trust the discretion of highhanded school administrators with an alleged taste for the “totalitarian,” why would we trust the Congress, an ongoing class reunion of politicians, to set rules for the behavior of those who wished to unseat them?
A few years after the Des Moines case, four students attacked a school security officer after he intervened to halt a brawl in the lunchroom. The school principal, who herself had witnessed the incident, promptly suspended the four students. But the Supreme Court reversed the suspensions, holding that the status of being a student was a protected property right within the Fourteenth Amendment.104 That same year, the Court held that students who had been suspended for spiking the punch at a school dance could sue school administrators for monetary damages for the violation of their Fourteenth Amendment rights to a hearing.105 Subsequently, the Court extended similar rights to government employees who faced termination.106
One impact of what Henry Friendly called a “due process explosion”107 was to invite protracted and costly jury trials—or the threat of jury trials—where institutions and governments had to justify their decisions. Increasingly these institutions sought to avoid making discretionary choices they feared might be costly to defend.
The fear of costly litigation infected many ordinary daily decisions. It wasn’t simply that persons involved in the administration of schools, hospitals, churches, parks, and sports leagues suddenly faced frivolous and yet expensive lawsuits; it was that ordinary people who had had little to do with lawyers now felt a threatening, litigious presence in the background of everyday life. It was often reported that a significant share of medical expense went to unnecessary, defensive tests,108 and the popular press delighted in reporting absurd tort cases.109
Holdings protecting the Fifth Amendment right of criminal defendants to remain silent and the Sixth Amendment right to counsel110 were added to the Supreme Court’s controversial exclusionary rule. The exclusionary rule held that evidence improperly collected—without a valid warrant, for example—could not be constitutionally introduced at trial.111 Soon, criminal trials were chiefly about criminal procedures, which were in turn chiefly about the application of constitutional rules. The aggressive defense of defendants meant that guilty, often dangerous defendants were acquitted on what were obvious “technicalities,” i.e., flaws in the investigation and prosecution of the case that did not relate to the guilt or innocence of the defendant. A prominent lawyer and public interest advocate, Philip Howard, concluded, “[i]n the place of officials who had been unfair, [now there were] self-interested individuals [who] bullied the rest of society.”112 As a result, “[r]ace relations were strained, government unresponsive, schools unmanageable and [criminal] justice perceived as a game.”113
Whatever the effects on institutional practices, the consequences of these developments for the standing of lawyers and the legal profession were catastrophic. While the number of lawyers doubled in the quarter century after Gilmore’s lectures,114 their standing in the public eye plummeted. In 1977, the Supreme Court handed down its decision in Bates v. State Bar of Arizona,115 striking down a ban on advertising by lawyers. Reasoning that such advertising was commercial speech protected by the First Amendment, the Court held that the public’s access to information about the pricing and availability of legal services outweighed the Bar’s desire to maintain an image of professionalism. “Bankers and engineers advertise,” Justice Blackmun wrote, in a remarkably obtuse observation, “and yet these professions are not regarded as undignified.”116
Just how far the public perception of such a change in the role of lawyers went can be seen eight years later, when the Supreme Court handed down Supreme Court of New Hampshire v. Piper in 1985.117 Kathryn Piper was a lawyer who lived in Vermont but wanted to practice law in New Hampshire, as she lived quite close to the state line. She submitted her application to the New Hampshire Bar Examiners, took and passed the bar exam, but was then informed that she would have to establish residence in New Hampshire before she could be sworn in.118 It had been assumed that, at least for the purposes of Article IV of the U.S. Constitution, states had considerable leeway in setting the requirements for the offices of state, which were distinguished from mere businesses.119 Nevertheless the Court had little difficulty in identifying the lawyer’s role as essentially that of a market participant, and struck down the New Hampshire residency requirement.120 The notion of the attorney as an “officer of the court” seemed quaint.
It had long been an open secret that law firm partnerships were becoming rarer and more tentative when they were awarded. Partners didn’t expect to stay with the same firm for an entire career and firms didn’t commit to retaining partners in whom they lost confidence as generators of profits. More adversarial relationships seemed to prevail among lawyers even outside the courtroom, depositions dissolved into efforts to intimidate and humiliate, and the Moloch-like rule of billable hours seemed to taint all participants who sacrificed, and were sacrificed, to it. Deborah Rhode, the director of the Stanford Center on the Legal Profession, has recently described a deep dissatisfaction throughout the legal profession that, she has concluded, is reflected in the high rates of stress, depression, and substance abuse reported in numerous surveys.121
From the protectors of litigants’ rights, lawyers came to be seen as hectoring tormentors once those rights were no longer perceived as reasonably limited. But who was to say what was “reasonable”? Judges had been doing that—the “reasonable man” appears as often in judicial opinions as a butler in English country house mysteries—but something had changed. We no longer believed that the reasons judges gave for their rulings accurately reflected the true grounds for their decisions.
Partly this was the result of the late nineteenth- and early twentieth-century dethronement of the autonomous mind, a revolutionary defenestration in which the Legal Realists played the role of enthusiastic Jacobins. Minds, judicial or otherwise, were no more than brains, subject to the vagaries of billions of chance, evolutionary twists of the helical ascent by which man had abandoned his brother the chimpanzee; minds were “conditioned” by class preferences and the cultural hegemony of ruling groups; minds were unconscious, pushed by the lingering effects of unrecognized and distant traumas, pulled by the attractions of pheromones and artfully shaped chrome automobile grilles.
But mainly, the discrediting of judicial autonomy sprang from the same origin as the discrediting of the autonomy of law itself: the move from observing that law was no more than whatever the judge said it was to demanding that we find out just what was motivating judges, if it was not the reasons they gave for their rulings. Thus was the green apple of self-knowledge cultivated by the Legal Realists and consumed by the Republic. If the liberal state’s balancing of interests was the death of Reason, as Duncan Kennedy liked to quote—the death, that is, of Formalism—then what demonic forces were alive and calling the shots?
VI.
This inquiry led directly, even inescapably, to one of the most insidious habits loosed on the jurisprudential scene, partly by journalists and politicians, but also partly by law faculty and practitioners. This is the practice, sometimes accompanied by a sneer, of treating the truth or falsity of a legal conclusion as always and only the equivalent of an analysis of the person asserting it.122 Any notion that law was a matter of obligations and duties was dismissed on the grounds that its purpose was simply to validate the corporate system—on this, both the CLS and the Law and Economics movements seem to agree.123 Legal analysis was then properly a kind of diagnosis of the prejudices and biases of the analysand, typically a judge. This approach assumed, blithely, that the analyst was free of bias and, more damagingly, replaced the rationale offered by judges with the alleged discovery of their emotional, political, and cultural attitudes. That completed the journey begun by Legal Realism: it ended in a wilderness of mirrors where the judicial analyst was himself the analysand. Where once the Supreme Court reporter for the New York Times had refused his editors’ demands to say which President had appointed an opinion’s author and dissenters, such identification was now considered obligatory.124
This approach did, however, hold out a promise: armed with the telemetry of a judge’s psyche (or political background, which to the commentator was about the same), the analysts, whether historians or journalists or law professors, ought to be able to predict not only the outcome125 but also the rationale that served its purpose. But could they?
In 2008, the Israeli legal historian Assaf Likhovski undertook an extensive analysis of the methods used by the CLS historian Morton Horwitz to answer this question, “What factors influence judicial decisions?” After describing Horwitz’s efforts, and those he inspired, Likhovski concluded:
Whether we use the broad-brush Horwitzian approach to the history of judicial doctrines, such as the one applied in [Horwitz’s early work or] the more nuanced, complex, thicker, and culturally sensitive methodology used by Horwitz in his later work; a biographical approach focusing on specific judges rather than on the development of specific doctrines; a micro-history of specific landmark cases; or even the “hard” quantitative methodology so favored by political scientists engaged in the study of judicial behavior, we will never really solve the mystery and reach the promised land of certain answers to what is, ultimately . . . [an] interpretative pursuit.126
Which is to say that, at bottom, these debates were about meaning, not about politics, for Legal Realism had demonstrated that simply following rules of precedent did not yield consistent and comprehensive meaning. This disenchantment not only tore at Formalism, it set the terms of whatever was to succeed Formalism. There must be some external, objective, determinate way to choose which rule to follow. So thought Posner; so, too, thought Unger and Leff.
Were they right? Judges seem to report a feeling of compulsion for most cases and, most of the time, agree across party lines.127 In this past Supreme Court Term, nearly half of the cases were decided unanimously128—and these were cases of sufficient difficulty to have made their way through the appellate process. And yet it was child’s play—or perhaps adult’s play—to show that there were often alternatives. What was going on? How could we reconcile the self-conscious, subjective reports with the analysis of judicial behavior that did not fall into predictable political, or sociological, or psychological patterns?
Just suppose that Legal Realism and Formalism are two different reactions to American law that depend upon a shared expectation. That expectation is that a legal rule is either true or false depending on its relationship to a fact in the world. The Formalist asserts that a legal rule is true when it corresponds to such a fact: one asserted by modern microeconomics, for example, or reclaimed by a study of the original intentions of the Constitution’s ratifiers, or commanded by the text of the Constitution. The Realist looks at law and, finding a mass of contradictory or potentially conflicting statements, concludes that a legal rule can have only an arbitrary correctness. For the Legal Realist, the legal facts of the world to which the Formalist would adhere—sovereignty, or negligence, or consideration—are no more than conclusions that obtain whenever a court says they do. Insofar as legal rules purport to be about the world of facts, they are illusions.
These two temperamentally opposite reactions share the assumption that a legal rule is a proposition of law and, perhaps for the law graduate about to take the bar exam, this is true. But is it true of a judge who is commanded to follow a legal rule? I would say that insofar as a legal rule is used to resolve and offer a rationale for the resolution of a case, it is not a proposition of law at all. Don’t mix a decent Scotch with Coca-Cola, don’t strike a woman, don’t use racial or ethnic epithets, don’t curse in front of a child, don’t wear brown shoes to an evening dinner party, don’t disengage the clutch while taking a corner—these are all rules for behavior but they are not propositions. They are things that we—we who aim to be respected by our friends, taken seriously despite all evidence by our families—things that we who know better would not be caught dead doing.129 Some are trivial, some are essential, but all are contingent. “To demand more than this is perhaps a deep and incurable metaphysical need; but to allow it to determine one’s practice is a symptom of an equally deep, and more dangerous, moral and political immaturity.”130 Perhaps the greatest contingency is the human conscience—reflecting in part and often unpredictably countless habits and cultural practices—but recognizing this should not make us any less faithful to our consciences.131 Indeed, recognizing the limitations of justification should not diminish by one iota the legitimating function of our practices, when these are structured by the rules of the game. After all, a roll of the dice will never abolish chance,132 and card play can never repeal uncertainty.133
If this is right, then the way a judge reaches a decision is almost beside the point; rather, it is the way she explains it that counts. After all, it is the rationale that will serve as the basis for future decisions, not simply the outcome vis-à-vis the parties. A holding that was reached by secretly flipping a coin but is explained by a persuasive rationale is sufficient; a holding that is reached by conscientious and even agonized soul-searching but explained unpersuasively is not. Seeing what makes the rationale persuasive requires a bit of training and cultivated thought. To the layman, all legal opinions will appear to be an arbitrary series of choices. But to a judge working within well-defined conventions of legitimate argument, the application of a legal rule will often appear to be determined for her. This partly explains why constitutional law professors are badgered at cocktail parties by trusts and estates lawyers who deplore the lack of rules by which the Constitution is construed but take umbrage at any similar slight regarding the interpretation of a codicil to a will.
VII.
At about the time the CLS and the Law and Economics movements were gaining preeminence in the legal academy with their sustained assaults on what Unger described as a methodological consensus in law schools,134 another approach—more radical in its way than either movement—made its initial appearance in constitutional law. While those movements sought to discover new truths about the law, this approach attempted to gain a clearer view of what we already implicitly knew about it. While they depended upon attaining new perspectives free of the confines of law itself, this approach depended upon achieving a more perspicuous account of the structure of our arguments, the medium in which we actually do law.
This approach studied the “methodological consensus” not to de-legitimate it, but to determine how legitimacy was maintained, as it was generally felt that legitimacy was precisely what methodology lacked. As Henry Hart had confessed, the legal process does not provide any justificatory underpinnings;135 it may be a “thrilling tradition,” to some at any rate, but like other traditions it can be employed on behalf of unjustifiable ends.
With respect to constitutional law, this approach focused on the claim that “all legitimate constitutional argument takes the form of one of six modalities: appeals to the text, to structure, to history, to precedent, to prudence (or consequences), and to national ethos.”136
Sometimes called a “modal” approach, it was useful in part because it laid bare the self-replicating set of practices that formed the basis for American constitutional argument. It showed that the American methods of constitutional argument were self-legitimating in the sense that their legitimation arose from the repeated acts of practicing those methods to resolve and explain cases, practices that had deep roots in the much older English common law tradition. As Gilmore noted in his “Age of Discovery,” the Americans took on more or less wholesale the means of analysis used by the English common law.137 What he did not say, and what is at least as important, is that these means were then applied in the United States to the law of the state; that is, when the state was put under law by means of a written constitution, the methods for construing that constitution were those hitherto used to construe wills and deeds, writs, and judicial opinions. Thus this approach proved to be a clarifying way of analyzing almost any constitutional issue from a legal point of view.
Moreover, this approach allowed different scholars and jurists “to define themselves (and, equally importantly, to define others) in terms of their favored modalities of argument. . . . Thus [these] modalities not only characterized different forms of argument, they also characterized different forms of scholars, and different forms of scholarship.”138 The law journals began to publish articles wholly devoted to the nature of originalism, or textualism, and so forth; rich and insightful books began to appear on these subjects, too.139 This perspicuity, this clarity, brought new light to some longstanding controversies.140 Of greater importance, however, was the notion that such an approach empowered persons other than judges. This was helpful for those questions, some of which are discussed below, that are important constitutionally but are not justiciable. Instead of wringing our hands because there was no case on point or, worse, drawing the conclusion that there were as a consequence simply lawless zones where the Constitution did not apply—‘the standards for impeachment are whatever the House thinks they are’ is one deplorable example—we now had at hand methods to resolve constitutional questions in the absence of judicial opinions. Indeed, “many social movements have reformed our institutions by daring to interpret the Constitution for themselves and by persuading others that their constitutional vision was the correct one.”141
Yet in a legal world of differing and sometimes conflicting modal answers, how were we to resolve such conflicts? Which forms of argument trumped others? And if conscientiously following the modal forms assured legitimacy, what claim did this practice make for justice?
My own answer is that there is no hierarchy of modal forms, but that, rather than being a cause for despair, this opens up a space in constitutional decisionmaking for the role of the individual conscience. The justice of the American system is not that it corresponds, or can be made to correspond by main force, to an external notion of “the just,” whether Platonic or Marxist or what-have-you, but rather that it compels a recourse to conscience, which of course may be informed by our religious or philosophical or political convictions. It is true that constitutional theorists have, over the years, tried to create a system of constitutional interpretation that maximizes legitimacy by minimizing discretion;142 it is also true that the modal approach, for all its promise, has not, I regret to say, deterred them.
VIII.
One subject has entered the canon of American law in the last decade, a subject that Gilmore had not anticipated. This is the “strategic turn” in constitutional law,143 which brought squadrons of mild-mannered law professors to the task of integrating the subjects of national security—defense policy, intelligence collection and analysis, diplomacy, and strategy—into constitutional law and international law, and which even sought to integrate the jurisprudence of these two disciplines.144 When Gilmore gave his Storrs Lectures, the importance of national security as a fundamental driver of the evolution of law was quite generally neglected; marginal subjects such as “national security law” dealt with the statutory frameworks for regulating the intelligence and defense agencies, or, at most, civil liberties litigation that attempted to frustrate executive authority. Few in the 1970s would have suggested that the U.S. Constitution was principally the result of a widespread concern among the Framers for the security of the American state. The preservation of slavery or the economic hegemony of the Founding Fathers was said, with a knowing smile, to be a more probable cause of the movement toward a new constitution in the 1780s. Europe and its predatory empires were rarely mentioned as having anything to do with the founding of the American constitutional system. Nor did many law teachers treat international law as mainly driven by its interaction with war and conflict.
Perhaps the strategic turn was precipitated by the attacks on September 11, 2001; the sense of invulnerability Americans had hitherto enjoyed was breached (even if the actual threat was far less than that endured during the Cold War). I am inclined to think, however, that the intellectual foundation for this change had been building for a long time and that it arose in part from the experience of law professors and lawyers who served in national security posts within the U.S. government. On returning to private life or the academy, they found it obvious that just as law teaching had increasingly walked itself into a cul-de-sac, isolated from the practice of law, so had it wandered away from the real drivers of state formation.
Not long before this, I was approached by a colleague of mine, a tax professor who had developed a keen interest in the origins of the U.S. Constitution. “Do you know why we have the Constitution we have instead of the Articles of Confederation?” he asked. “Yes, I think so,” I replied, “but what do you think?” “Taxes!” he answered. And I said, “Yes, that’s right. But what were the taxes for?” At the time, no one focused on the correct answer, which was “War.”
The strategic turn among the law professoriate was generally to the good. Constitutional law had become wholly distorted, driven in part by the due process explosion into an obsession with constitutional litigation and criminal procedure, heedless of the source of constitutional law in the origins of the state. Langdell also had a hand in this neglect. For while the case method he introduced made a good deal of sense for the study of contracts, property, and torts—common law subjects that were developed by judges—it made much less sense with respect to constitutional law, where most of the law was not made by courts, or with respect to international law, where almost nothing of significance had its final outcome in a court case. Generations of students were taught that constitutional law was principally a matter of judicial review because that was where the cases were, and cases were what the Langdellian casebooks contained.
Such casebooks, collecting and editing appellate judicial opinions, are not of much help with some of the most important constitutional questions. Among these questions are: what are the standards for impeachment; can a state within the Union secede; can the President obligate the United States to pay a debt prior to the consent of Congress, as in the purchase of Louisiana from France; must the Executive seek a declaration of war before initiating hostilities or entering a belligerency; what are the standards for consent to judicial or cabinet appointments; is the consent of the Senate necessary for the removal of a federal official confirmed by the Senate; and many others. None of these matters appeared in the casebooks of Gilmore’s contemporaries, which contained long and intricate discussions as to whether the Kitty-Kat Club of South Bend, Indiana, could dispense with G-strings and pasties on its dancers and still retain the protection of the First Amendment.145
The outcome of the revolution in thinking about the security dimensions of constitutional law—a revolution that is still underway—has been largely salutary. Now students who are taught the Federalist Papers are not limited to No. 10, which was assigned little importance by the Framers, nor are they instructed to skip the first, most important papers on war, diplomacy, and the need for a strong security state.146 As to whether the legion of law professors who now fill the Op-Ed pages of our leading newspapers with their suggestions for Middle East peace, or for the most efficacious approaches to Iran or China or Russia, have made lasting contributions: they at least did little harm, for they were not taken seriously except possibly by their authors, and they may even have done some good by engaging students in what is very much a growth industry in the law.
IX.
One cannot speak of the “strategic turn” in constitutional law without noting the much more pervasive “empirical turn” that characterizes legal scholarship across the board. Gilmore identified the arterial flow from Langdell to Posner: it pulsed with the idea that law is a science, if not identical to the physical sciences then at least similar to them, sciences whose prestige was already mounting in Langdell’s day. This simile gained plausibility in the case of Law and Economics because there was an additional step that brought greater verisimilitude: law is a social science, and the social sciences are like the physical sciences. When this comparison also began to fray, another step fortuitously appeared. Law used—perhaps even required—the methods of the social sciences, and the empirical and statistical methods of the social sciences were like those of the physical sciences. As one scholar observed, “The scientific method inherent in law and the social sciences offers a way of attempting to transcend mere personal values by providing empirically testable hypotheses . . . .”147 Leaving aside whether the findings of the social sciences are in fact reproducible and reversible like the findings of physics, it is indisputable that the attitude animating such a perspective has had an impact on the legal academy and its literature. Where once it was rare for those seeking jobs as law professors to have PhDs, now for the most competitive applicants it has become de rigueur, and these doctorates are overwhelmingly in the social sciences. This is not unrelated, I surmise, to the fact that the articles in law reviews are increasingly “scholarly, professional, technocratic and less imaginative all at the same time [as well as] long and complicated.”148 Surprisingly to those who believe that the methodologies of the social sciences render their conclusions uniquely persuasive, the use of law review articles by lawyers and judges has plunged. About forty-three percent of law review articles (to make a statistical point) have never been cited in another article or in a judicial decision.149 It reminds one of the summary Randall Jarrell once gave of the criticism in the literary journals of his day. After observing that some of the best critics alive put most of their work in such magazines—Henry Monaghan comes to mind in the journals of today—Jarrell nevertheless concludes,
But a great deal . . . . is not only bad or mediocre, it is dull; it is, often, an astonishingly graceless, joyless, humorless, long-winded, niggling, blinkered, methodical, self-important, cliché-ridden, prestige-obsessed, almost-autonomous criticism.150
Such criticism, in trying to supplant the law with its own methodologies, seeks to move beyond the divisions and conflicts that are so endemic to the common law—of which our methods of constitutional interpretation are a descendant. But if it be conceded that law is not a social science after all, such writers demand to know how we could critically evaluate law without making it the subject of scientific methodology. Surely the data set and the regression analysis will do what law has been unable to do: achieve an irresistible consensus. We don’t argue about the speed of light or the molecular composition of water; why should we argue about whether law is doing what it is supposed to be doing, and how different rules would alter the way people behave in different circumstances? Surely we can at least agree that people ought to be able to register the preferences they in fact prefer (or report that they prefer) and that our laws should facilitate this.
You see, the point of trying to assimilate law into the social sciences by appropriating their methodologies is to escape the mire of our conflicting personal values. In this desire, its advocates are not so very different from Pareto or Holmes. I confess I am skeptical that running away from the expression of our values will in fact achieve consensus, even on the rather minor issues as to which we believe we have the greatest statistical certainty. There is no statistic in law that is not value-laden,151 because it is introduced for a purpose and that purpose is to vindicate our values. That doesn’t mean we can’t agree on facts, nor does it mean that we can’t find ways to go forward even when we disagree about values. That, after all, is what law does: it allows us to go on despite our differences.
X.
I hope Gilmore’s masterpiece of irony and wit stays in print to delight future generations of students. If this happens, someday a student will read these words as distant in time from their writing as the writer is distant from the evenings when he heard Gilmore deliver the lectures almost forty years ago. What will happen in the interim?
I diffidently venture two guesses.
First, last Term (2012–2013), the U.S. Supreme Court handed down Shelby County v. Holder,152 a case involving a challenge to the Voting Rights Act, which requires “preclearance” by the Department of Justice for redistricting plans in certain southern states.153 Chief Justice Roberts, writing for the majority, observed that the factual assumptions on which the preclearance requirement was based—facts as to minority representation and voting in the states singled out—had dramatically changed since the adoption of the Act in 1965.154 Because the determination that these states remained subject to the preclearance requirement was based on those now-superseded facts, the Chief Justice concluded that the continued application of the oversight provision was unconstitutional.155 There was simply no factual basis for departing from the constitutional principle that mandates the equal treatment of states. Putting aside the persuasiveness of this conclusion, it is not so very far from Gilmore—and Guido Calabresi, following Gilmore—who proposed that courts should exercise a common-law function when dealing with statutes.156 When the facts have fundamentally changed and when Congress has not been able to re-enact the statute whose factual basis has vanished, judges ought to simplify, cohere, even reject the ostensible commands of the law using the same methods of reasoned elaboration they have honed over the centuries of common law adjudication. If this does not happen, Gilmore and Calabresi argued, we shall all drown in the rising tide of defunct legislation that annually advances but rarely recedes.157
My second guess is no less radical, but I must, unfortunately, advance it without the imprimatur of my illustrious predecessors. The triumph of the years leading up to Gilmore’s lectures was the “nationalization” of American constitutional law. Now, a burglar caught in the act in Louisiana is read the same rights as if he were in New York or California. Now, the requirement of a warrant precedent to a search by the police is the same in every jurisdiction. Now, the standards for capital punishment, vain though they may be, are everywhere the same for the age of the person sentenced to die, his mental status, the crimes for which a capital penalty may be levied, even the methods that can be used.158 This was indeed a triumph, but it is not the end of history. We can already see the outlines of the next great constitutional transformation, as the American state evolves from one constitutional order, born in the Civil War, that attempts to reverse market decisions—for what greater expropriation of private wealth has there ever been in this country than that accomplished by the Emancipation Proclamation159 and the Thirteenth Amendment?—to another that attempts to use the market for the state’s political goals, and even to adapt its methods to governing. Whether the subject is conscription, marriage, women’s reproduction, or the deregulation of industrial practices, the United States is changing the basic orientation of its law toward the market.160
This is one source of the increasing interest and impact of behavioral economics on law and other areas of public policy. Behavioral economists have long observed that, in assessing the statistical risks of various activities, decisionmakers tend to be guided by an overall attitude toward the problem—an “affect heuristic”—that often skews their appreciation of the situation.161 Sometimes people think they are making decisions based on the facts, but they are actually influenced by what other persons in their social networks believe—an “informational cascade” that leads to mass movements that are otherwise inexplicable.162 “Framing” behavior often superimposes a general approach to a problem based on stereotypes and anecdotes that have little basis or relevance. Appreciating that much decisionmaking is intuitive, rapid, and emotional, rather than coolly calculating, deliberative, and logical, these economists, and the law professors who are their enthusiastic followers,163 suggest ways to counter these predictable biases and even to exploit them. If it is observed that persons tend to favor the path of least action—say, when opting in or out of an organ donor program—and we wish to encourage donations, then it makes sense to make enrollment the default, requiring some particular act on the part of the decisionmaker to opt out. Or, at the very least, if we merely wish to enable people to make the decisions they thought they were making, and indicated when queried that they wanted to make, there are various techniques to assist us in overcoming the known biases and irrationalities that are commonly observed. For example, if a person wishes to abstain from smoking or wants to contribute to a retirement account, there are “pre-commitment strategies”—think of Odysseus lashing himself to the mast so as not to succumb to the sirens’ song—that can help to accomplish these goals.164
Industrial nation-states, the exemplars of the constitutional order that arose in the second half of the nineteenth century and dominated the twentieth century, sought to tame the market by regulation, expropriation, intervention, and even direct participation through state corporations that ran telecommunications companies, national airlines, energy exploration and development, and much else. That constitutional order is now waning, as market states begin to emerge. Thus, for example, we are moving from the state-owned enterprise to the sovereign wealth fund, as we go from trying to defy markets to using them to achieve our objectives.
This is quite different from the Law and Economics ideologies of Posner and his colleagues because it does not presume to set an overall political goal—efficiency—but either localizes political preferences in the individual so that the citizen stands in relation to the polity as a consumer stands in relation to the market, or assumes that there are some decisions that well-informed persons would always prefer.
Perhaps a more futuristic example will help to clarify the sort of techniques we can expect from an American market state. Consider the “nationalization” process I praised a few paragraphs above. We know from recent Supreme Court cases165 that the government may not restrict the possession of firearms for self-defense beyond some very rudimentary bounds, such as prohibiting minors and felons from having guns. But suppose a person wished to live in a particular apartment house or even a “gated” community. It seems clear that the co-op board or the shareholders could make it a condition of residence that no ammunition or firearms be kept within the premises they govern. They might say: you have a Second Amendment right to have a gun, but you have no right to live in our buildings; if you wish to do so, you must abide by these non-discriminatory rules. This is an evasion of the Second Amendment by means of the market, and I would not be surprised if many communities found it attractive.
XI.
Of course, no one can really know the future because it moves away from us as we reach toward it—it does not become the present, as we sometimes assume—just as, in a similar manifestation of a different phenomenon, the past is most influential when we try to run from it. I can anticipate the developments of another forty years no more confidently than Gilmore could, and quite possibly a good deal less.
But I will risk this. It will be a period in the history of the United States shaped by law, because law is the principal tool by which we Americans shape our destiny. If we are to become a worthier society, we shall have to make our methods of creating and administering law far more effective than the plangent disharmonies that issue from Washington and most of the state houses around the country would suggest. If we don’t wish to attempt such difficult reforms—abandoning the 1970s two-track reform of the filibuster in the U.S. Senate, which allows a few senators to hijack the legislative process and hold it hostage, comes easily to mind166—we may well be able to persuade ourselves that it was for the better. The sacrifices and compromises we have avoided would probably have only made things worse, we may comfortingly conclude. The cost of such comfort is paid in our self-respect, which, contrary to appearances, is not really exalted by intransigence and vanity.
Great states do not lose influence and their ability to direct their own affairs because it is fated that they must fall. The lives of states do not progress from birth to certain death as do the lives of men. Great societies lose their greatness when they lose confidence. For Americans that means the confidence that we can structure our fates through law, leaving the essential choices to the conscience of each person called upon to decide the question the law has posed.
As we enter the Age of Consent, the era of a new, already emerging constitutional order that puts the maximization of individual choice at the pinnacle of public policy, it would be well to appreciate the structuring role for choice that American law has always provided. Far from obviating the need for our consciences, our laws structure a necessary role for them. That highly structured role is reflected in representative government (rather than plebiscites), in the composition of juries (rather than mobs, even when they form over the Internet rather than outside a jail), in the belief in liberal education (rather than indoctrination), and in the responsibility of judges and lawyers to shape as well as defend the Constitution that gives them unique power. Those structures will be strictly scrutinized in this era, as they should be. How else will these habits and practices find defenders unless those defenders are convinced, after rigorous examination, that this way of structuring choice is worthy of defense?
For much of the twentieth century, however, these structures seemed to many quite tiresome, a series of barriers obstructing the way to fulfilling their desires most efficiently. Why not simply superimpose an ideology on American law and do away with all the contradictions and ambiguities that are so pervasive there? Yet ideology begets counter-ideology. Instead of expressing our values humbly but forthrightly so that we might argue for our convictions patiently while listening to counterarguments, a good many resorted to threatening or traducing those who disagreed with them, and our body politic became more riven and divided than before. In such a state, all that was indecent, intolerant, and ignorant steadily stained our public life, so that many turned inward in disgust and disillusion. It is little wonder, then, that in the early twenty-first century we find ourselves moving, apparently inexorably, toward the discrediting and perhaps even the dissolution of our constitutional structures. And this will seem inevitable because, after all, what could we do? Who are we to impose our values on others through law, when the market can so easily let every man be subject to his own values alone?
Yet until God—or the Devil—answers our interrogatories, we must press on in the ways our profession has taught us,167 not expecting our questions to be answered definitively before a Final Judgment is rendered. The many efforts to superimpose a single, comprehensive ideological framework on the American system of law are actually inconsistent with the legitimacy of that system; its internal conflicts and contradictions are what provide the space for the role of conscience that is the authentic genius of American law.
This essay began with the question, ‘What happened next?’ The answer is the same as the answer to the question Grant Gilmore posed at the end of his lectures, when he wondered whether American society would become more or less just.
It has been struggle, and perhaps neither gain nor loss.168 For us, as for Gilmore, as for all those who submit their hopes to the discipline of an inherited but open-ended legal order, there is only the trying. The rest is not our business.169
Wherefore I perceive that there is nothing better than that a man should rejoice in his own works; for that is his portion: for who shall bring him to see what shall be after him?170
Philip C. Bobbitt is the Herbert Wechsler Professor of Federal Jurisprudence at Columbia Law School and Distinguished Senior Lecturer at the University of Texas Law School. I am grateful to Hubert Ahn, Akhil Amar, Louis Begley, the Hon. Guido Calabresi, Philip Howard, Sanford Levinson, Henry Monaghan, Travis Pantin, Dennis Patterson, the Hon. Richard Posner, Robert Quigley, and Roberto Unger for their comments and assistance. Any errors that may remain, despite their efforts, are my responsibility alone.