The Arrival of Artificial Intelligence and “The Death of Contract”

Ian Kerr

I remember it like it was yesterday (it was in fact 1990): I was a 1L studying the law of contract.

The inimitable G.H.L. Fridman stood front and centre at the podium, a little man with a big British accent and all the pomp and circumstance of a royal coronation. There he was, ready to deliver an hour-long introduction to our year-long subject matter—a single sentence, it seemed, composed of many subordinate clauses and peppered throughout with parenthetical remarks.

I remember many of his quips from that lecture and those that followed. But the one that has stuck with me the longest was practically muttered under his breath. It punctuated the finale of his first lecture and, again, his closing one:

“Of course, Grant Gilmore once noted: ‘We are told that Contract, like God, is dead. And so it is.’”

Fridman never went on to explain the remark. But he didn’t have to. Half an hour after that lecture, I was in the library signing out Grant Gilmore’s 1974 book, The Death of Contract.

In the two decades since it was written, the book had generated more than a little academic excitement—so much so that an edited and updated second edition was published some twenty years later, just as I was finishing up law school. Gilmore’s book was the subject of numerous law review articles and was examined by many of jurisprudence’s heavy hitters, including Morton Horwitz and Robert Gordon. Though less so in Canada, The Death of Contract is still required supplemental reading in many US law schools.

To invoke Nietzsche and thereby declare the death of god on your own subject matter is no small thing. Neither is it a trifle for a serious scholar to speculate that contract law would be swallowed up by tort—“Contorts” as Gilmore liked to call it.

While this was all very interesting to a 1L who was also in the final stages of a PhD in the philosophy of law, The Death of Contract struck a different chord in me. In my view, Gilmore’s takeaway was that the great evolutionary forces of the common law did not generate 20th-century contract law as we usually think of it. It was a reminder that Contract Law, like the contracts made through it, is a human artifact; a golem created from whole cloth by a few elite members of the judiciary and the academy.

In the very moment that legal academics started paying lip service to multi-disciplinary scholarship and this thing called the world wide web was being rolled out, I began to wonder: what would happen if contracts were no longer the work of humans but were machine-generated? What would happen if other artifacts created these artifacts?

This preoccupation of mine was a passing fancy. I finished up law school and my PhD, and that was that.

But, as time passed, things became fancy again. Right around the bong of the millennial clock, I was commissioned by the Uniform Law Conference of Canada (ULCC) to answer the question:

Can computers enter into contracts?

The question seemed very strange back then—at least to those of us with traditional common law educations like the one given by G.H.L. Fridman. After all, at its core, a contract is an agreement—a shared undertaking between individuals to enter into a legal relationship.

Computers don’t make contracts. People make contracts.

But the old road was rapidly aging. The success of e-commerce, everyone recognized, was premised on increasing automation. Pretty quickly, people on the web began to demand instantaneous responses, matchless memory and perfect performance 24-7—which meant removing the frontwoman, the middleman, really, as many people as possible, from any given transaction.

(It is interesting to see that the “live agent” has made something of a comeback in recent years—clearly a pushback against automation.)

And so the law needed to find a way to enforce bargains between machines and individuals where, strictly speaking, there was no consensus ad idem. This was crucial for machine-generated trades where there were no human eyes on at least one side of the transaction.

I remember the very spirited discussions that @johndgregory, I and other members of the ULCC E-commerce Working Group would have about nomenclature for the software/hardware systems that made such transactions possible. Ultimately we chose the term “electronic agent.” A subsequent debate ensued about whether these electronic agents should be treated as mere instruments or as something more. In terms of our Working Group’s ultimate recommendations for Ontario’s Electronic Commerce Act, I was on the winning side of the first debate but lost the second. In a later article (Spirits in the Material World), I elaborated on my position, arguing that a day would soon come when it would be pragmatic to treat electronic agents as exhibiting an intermediate ontology best dealt with by well-established principles in the law of agency (an idea subsequently developed with much greater precision and care by Chopra and White).

What I saw perched on the horizon were the first real steps towards artificial narrow intelligence—mySimon, Kasbah and other early AI systems—in which software agents carry out transactional work completely independently of human intervention (comparative pricing, negotiating terms, buying and selling). Indeed, these AI bots, and the much more powerful ones in use today, can generate deals the particulars of which no human being is ever aware; in some cases, deals that were never specifically intended, foreseen or authorized by their programmers or users.
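
To make that autonomy concrete, here is a minimal sketch of the kind of logic such a shopping bot runs. It is written in Python against an entirely hypothetical marketplace interface; the `Marketplace` class, the `buying_agent` function and the vendor names are my inventions for illustration, not anyone’s actual system:

```python
import random

# Hypothetical stand-in for a marketplace API; a real shopping bot
# (mySimon, Kasbah and their successors) would query live vendors.
class Marketplace:
    def __init__(self, asking_prices):
        self.asking = asking_prices  # vendor name -> asking price

    def quotes(self):
        return dict(self.asking)

    def counteroffer(self, vendor, offer):
        # Toy vendor logic: accept any offer within 5% of asking.
        return offer if offer >= self.asking[vendor] * 0.95 else None

def buying_agent(market, budget):
    """Compare prices, make counteroffers and commit to a deal,
    with no human review of the particular terms accepted."""
    for vendor, asking in sorted(market.quotes().items(), key=lambda kv: kv[1]):
        offer = min(budget, asking * random.uniform(0.90, 1.00))
        agreed = market.counteroffer(vendor, offer)
        if agreed is not None and agreed <= budget:
            return {"vendor": vendor, "price": round(agreed, 2)}
    return None  # no deal struck within budget

market = Marketplace({"acme": 104.00, "globex": 99.00, "initech": 120.00})
print(buying_agent(market, budget=105.00))
```

Note that the human sets only a budget; which vendor is chosen, at what price, and on what counteroffer are all decided by the code.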

These AIs are no mere instruments. They are not the dumb, static vending machines that are merely conduits through which purchases are made on fixed terms (think: Coke machine).

More to the point, they are not machine-generated unilateral offers of the sort contemplated by Lord Denning (♫ Master of the Rolls and Champion of Equity ♫) in Thornton v Shoe Lane Parking [1971] 2 WLR 585:

None of those cases has any application to a ticket which is issued by an automatic machine. The customer pays his money and gets a ticket. He cannot refuse it. He cannot get his money back. He may protest to the machine, even swear at it. But it will remain unmoved. He is committed beyond recall. He was committed at the very moment when he put his money into the machine. The contract was concluded at that time. It can be translated into offer and acceptance in this way: the offer is made when the proprietor of the machine holds it out as being ready to receive the money. The acceptance takes place when the customer puts his money into the slot. The terms of the offer are contained in the notice placed on or near the machine stating what is offered for the money.

One cannot, without great legal fiction, apply the same analysis to AI-generated contracts. Even if we were to decide on policy grounds to attribute contractual liability to individuals who use or program these bots, those individuals are not “offerors” in any known sense of that word. How could they be? Those individuals have no clue if or when the system will make an offer, no idea what the terms of the offer will be, and no easy means of affecting the negotiations or trade once underway.

AI-generated contracts of this sort problematize contract theory. I wonder whether Gilmore would view them as yet another nail in Contract’s coffin.

Of course, AI applications not only implicate legal theory but also legal practice.

On the market today are a number of AI products that carry out contract review and analysis. Kira, an AI system used to review and analyze more than US$100 billion worth of corporate transactions (millions of pages), is said to reduce contract review times by up to 60%. Likewise, a Canadian product called Beagle (“We sniff out the fine print so you don’t have to”) is faster than any human, reading at 0.05 seconds per page. It reads your contract in seconds and identifies who the parties are, their responsibilities, their liabilities, how to get out of the agreement and more. These are impressive products that improve accuracy and eliminate a lot of the “grunt work” in commercial transactions.
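
Products like these rely on trained machine-learning models, but even a crude, purely illustrative sketch conveys the flavour of automated clause extraction. The patterns and sample text below are toys of my own devising and bear no relation to Kira’s or Beagle’s actual methods:

```python
import re

# Toy clause-spotter: a crude illustrative stand-in for the trained
# machine-learning models that real contract-review products use.
PATTERNS = {
    "parties":     r"between\s+(.+?)\s+and\s+(.+?)[,.]",
    "termination": r"(either party may terminate[^.]*\.)",
    "liability":   r"(liability[^.]*\.)",
}

def review(contract_text):
    findings = {}
    for label, pattern in PATTERNS.items():
        match = re.search(pattern, contract_text, re.IGNORECASE)
        if match:
            findings[label] = match.groups()
    return findings

sample = ("This Agreement is made between Acme Corp and Widgets Ltd. "
          "Either party may terminate on 30 days' notice. "
          "Liability is capped at the fees paid in the prior year.")
print(review(sample))
```

The real systems generalize far beyond fixed patterns, of course; the point is only that clause identification is the kind of task that lends itself to automation.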

But hey—my Contracts students are no dummies. They can do the math. Crunch the numbers and you have a lot of articling students and legal associates otherwise paid to carry out due diligence who now have their hands in their pockets and are looking for stuff to do in order to meet their daily billables. What will they do instead?

In some ways, such concerns are just teardrops in an ocean full of so-called smart contracts that are barely visible in the murky depths of tomorrow. Their DRM-driven protocols are likely to facilitate, verify, and enforce the negotiation and performance of contracts. In some cases, smart contracts will obviate the need for legal drafting altogether—because you don’t actually need legal documents to enforce these kinds of contracts. They are self-executing; computer code ensures their enforcement.
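
To see what “self-executing” means in practice, consider a minimal sketch of the escrow logic a smart contract might encode. Real smart contracts are deployed as code on a blockchain (on Ethereum, typically written in Solidity); the Python below is only a toy of my own that mirrors the control flow, with hypothetical names throughout:

```python
import time

class EscrowContract:
    """Toy self-executing escrow: once funded, release or refund is
    decided by code alone, with no promise to keep and no suit to bring."""

    def __init__(self, buyer, seller, price, deadline):
        self.buyer, self.seller = buyer, seller
        self.price, self.deadline = price, deadline
        self.funded = self.delivered = False

    def fund(self):
        self.funded = True  # buyer's payment is locked in the contract

    def confirm_delivery(self):
        self.delivered = True  # e.g. signalled by a trusted data feed

    def settle(self, now):
        if not self.funded:
            return "nothing to settle"
        if self.delivered:
            return f"release {self.price} to {self.seller}"  # performance: automatic payment
        if now > self.deadline:
            return f"refund {self.price} to {self.buyer}"    # non-delivery: automatic refund
        return "pending"

deal = EscrowContract("alice", "bob", price=100, deadline=time.time() + 86_400)
deal.fund()
deal.confirm_delivery()
print(deal.settle(now=time.time()))  # -> release 100 to bob
```

Once the contract is funded, settlement is whatever the `settle` method computes; no party promises anything, and no court is asked to enforce anything.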

It is said that these AI contracts “create valuable trust.” But not in the way that traditional contracts do.

Historically, contracts generated trust through the moral institution of promise keeping. As Charles Fried famously argued:

The device that gives trust its sharpest, most palpable form is promise. By promising we put in another man’s hands a new power to accomplish his will, though only a moral power: What he sought to do alone he may now expect to do with our promised help, and to give him this new facility was our very purpose in promising. By promising we transform a choice that was morally neutral into one that is morally compelled. Morality, which must be permanent and beyond our particular will if the grounds for our willing are to be secure, is itself invoked, molded to allow us better to work that particular will. Morality then serves modest, humdrum ends: We make appointments, buy and sell, harnessing this loftiest of all forces.

Ethereum and other blockchain-based AI platforms that permit self-executing contracts are said to circumvent the need for making or keeping promises. As such, they create “a world where specific performance of contracts is no longer a cause of action because the contracts themselves automatically execute the agreement of the parties.” Some say that a further consequence of this is that, “[s]omeday, these programs may replace lawyers and banks for handling certain common financial transactions.”

Although there are some amazing elements in this that could very well revolutionize commerce and government much for the better, I have argued elsewhere about the evils of digital locks and the permission culture that they help to generate. We had better be careful.

My point here, though related, is a different one.

My takeaway is that the common law is not the only force that has dealt blows to traditional contract doctrine. AI and other emerging information technologies further challenge the notion of contract as a consensus-based agreement between individuals.

Returning to The Death of Contract: Gilmore envisioned a future in which case law doctrine would be dislodged from the core study of law. As Robert Gordon characterizes it:

The tone is elegiac—[Gilmore] seems to be saying: “Many ingenious lovely things are gone.” He sees moving across the distant plain armies of sweating sociologists, clipboards and calipers in hand, to bivouac in plastic tents where pyramids once stood. And in fact we shall do well to worry if the new orthodoxies threaten to constrict our view of the world as narrowly as the old ones they are replacing.

I guess my tone is equally elegiac and I am not sure why.

On the one hand, like Ian Macneil and others, I have always thought that Gilmore’s reports of the death of contract were greatly exaggerated. Macneil spent much of his career arguing that contracts must be studied and understood as relations rather than as discrete transactions. Although his “essential contract theory” did not gain as much traction as he would have liked, the idea that people will stop entering into such relationships, or that they will stop promising things to each other in ways that the law must respond to—even if “more honor'd in the breach than the observance”—is pretty hard to fathom. These are deeply entrenched social institutions, and they aren’t likely going anywhere anytime soon. Not without a serious amount of chaos.

On the other hand, I do think that the arrival of AI is further undermining core aspects of contract doctrine such as “agreement”, “consensus ad idem” and the “intention of the parties.” As I have argued elsewhere, we see similar challenges with a related series of concepts in privacy and data protection law.

My thinking about all of this has really only just begun. But I suspect we will face some significant changes, and I am not sure that it’s all good. Self-executing contracts, like the DRM systems upon which they are built, are specifically designed to promote the wholesale replacement of relational aspects of contract such as trust, promise, consent and enforcement. As such, they do injury to traditional contract theory and practice. While I have no doubt that an AI-infused legal landscape can to some extent accommodate these losses by creating functional equivalents where historical concepts no longer make sense (just as e-commerce has been quite successful in finding functional equivalents for the hand-written signature, etc.), I do worry that some innovations in AI-contracting could well have a negative effect on human contracting behavior and relationships.

One day, I hope to offer a fuller elaboration of why I think this is the case.

* Ian Kerr holds the Canada Research Chair in Ethics, Law and Technology at the University of Ottawa and is a founding member of the Centre for Law, Technology and Society, where he teaches Contract Law and The Laws of Robotics.