AI Should Stand for Attorney Intelligence

This week on the Advanced Discovery blog, I wrote:

In Henry VI, Part 2, Shakespeare famously wrote, “The first thing we do, let’s kill all the lawyers.” Are computers doing just that? In June of 2012, the Wall Street Journal ran an article entitled Why Hire a Lawyer? Computers Are Cheaper, which reported on the extensive use of computers for pretrial document review in the Landow Aviation hangar rooftop collapse litigation. Specifically, the judge in that case permitted the use of predictive coding for that review, a general term that at the time referred to programs that used algorithms to determine whether documents are relevant to a case. The incentive was the savings over the then-estimated cost of $1 per document for human review.

Since then, of course, we have seen an explosion in the use of computers for document review under such titles as Technology-Assisted Review (TAR), Computer-Assisted Review (CAR), and Continuous Active Learning (CAL). This growth was perhaps best exemplified in a session at the 2015 Legal Tech conference called “Taking TAR to the Next Level: Recent Research and the Promise of Continuous Active Learning”. The session featured the following panelists: Maura R. Grossman and Gordon V. Cormack, two of the eDiscovery industry’s leading researchers in the field, who have published seminal articles on the subject; U.S. Magistrate Judge Andrew J. Peck, who has issued Orders in two of the leading cases in the field; and John Tredennick, the CEO of software company Catalyst and a leading proponent of the use of CAL.
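In spirit, a CAL workflow is a simple feedback loop: a model ranks the unreviewed documents, a human reviews the top-ranked batch, and those judgments feed back in to retrain the model. The sketch below is a toy illustration of that loop only, not any vendor’s actual product; the term-overlap “model,” the function names, and any sample data are invented for this example.

```python
# Toy sketch of a Continuous Active Learning (CAL) review loop.
# Real TAR/CAL systems use statistical classifiers; here a document is
# simply scored by how many terms it shares with known-relevant documents.

def score(doc, relevant_terms):
    """Score a document by overlap with terms seen in relevant documents."""
    return len(set(doc.lower().split()) & relevant_terms)

def cal_review(documents, oracle, seed_ids, batch_size=2, rounds=3):
    """Run a CAL-style loop: 'train' on reviewed docs, rank the rest,
    send the top-ranked batch to the human reviewer (the oracle), repeat.

    documents: dict of doc_id -> text
    oracle:    callable doc_id -> True/False (the human relevance judgment)
    seed_ids:  doc_ids reviewed up front to start the loop
    """
    reviewed = {doc_id: oracle(doc_id) for doc_id in seed_ids}
    for _ in range(rounds):
        # "Train": collect terms from every document judged relevant so far.
        relevant_terms = set()
        for doc_id, is_relevant in reviewed.items():
            if is_relevant:
                relevant_terms |= set(documents[doc_id].lower().split())
        # Rank the unreviewed documents by the current model's score.
        unreviewed = [d for d in documents if d not in reviewed]
        if not unreviewed:
            break
        unreviewed.sort(key=lambda d: score(documents[d], relevant_terms),
                        reverse=True)
        # The human reviews the top batch; labels feed back into the model.
        for doc_id in unreviewed[:batch_size]:
            reviewed[doc_id] = oracle(doc_id)
    return reviewed
```

The point of the sketch is the supervision question raised later in this post: the “oracle” is a human attorney, and every label in the output traces back to a human judgment, even though the machine decides which documents that human sees first.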

All this emphasis on technology reminds me of my old friend, the late Browning Marean. He was a great fan of the writings of Ray Kurzweil, the technologist and futurist who wrote The Age of Intelligent Machines. However, Browning’s favorite of Kurzweil’s books was The Singularity Is Near: When Humans Transcend Biology, which posited that technological advances would irreversibly transform people as they augment their minds and bodies with genetic alterations, nanotechnology, and artificial intelligence.

In fact, a recent survey by Altman Weil of 320 firms with at least 50 lawyers on staff found that 35 percent of the leaders at those firms (responding anonymously) see some form of AI replacing first-year associates in the coming decade. Fewer than 25 percent of respondents gave the same answer in a similar survey in 2011. Twenty percent of those same respondents said second- and third-year attorneys could also be replaced by technology over the same period, and half said that paralegals could be killed off by computers. (See graphic below.)

However, I am more mindful of another tenet of the Singularity: that the exponential increase in technology will lead to a point where progress is so rapid that it outstrips humans’ ability to comprehend it. To me, we are losing sight of the proposition that people are slow and computers are fast, but people are smart and computers are dumb.
Or, as I said when speaking immediately after Dan Katz on a panel at this conference last year, “Well, enough from Skynet; now let’s hear from the Resistance.”

And some of today’s greatest minds in technology feel the same way. Stephen Hawking stated, in a 2014 op-ed in The Independent, “Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks. In the near term, world militaries are considering autonomous-weapon systems that can choose and eliminate targets.” His fear? In a separate interview with the BBC, he stated it simply: “humans, limited by slow biological evolution, couldn’t compete and would be superseded by A.I.”

Hawking recently joined Elon Musk, Steve Wozniak, and hundreds of others in issuing a letter, unveiled at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, warning that artificial intelligence can potentially be more dangerous than nuclear weapons. Even Bill Gates has expressed concerns, saying during a Q&A session on Reddit in January 2015, “I am in the camp that is concerned about super intelligence. First, the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

Sound far-fetched? Well then, consider it from our perspective as attorneys. What is the ethical dilemma of bestowing legal responsibilities on robots? Does not all this talk of AI and CAL undermine our ethical duty to manage our clients’ matters if we don’t really understand how these tools work? Is Maura Grossman the only attorney using a computer to perform searches who is practicing law ethically?

As far back as 2013, Peter Geraghty (Director of ETHICSearch, ABA Center for Professional Responsibility) and Susan J. Michmerhuizen (ETHICSearch Research Counsel) wrote an article for Your ABA eNews called Duty to Supervise Nonlawyers: Ignorance is Not Bliss. Although the article focused on issues with paralegals and support staff, I would suggest that computers also qualify as nonlawyers, and that the concerns mentioned in the article should apply to them, and to the technical experts who use them, as well.

They wrote, “Subparts (a) and (b) of Rule 5.3 (Responsibilities Regarding Nonlawyer Assistants) of the ABA Model Rules of Professional Conduct state:
With respect to a nonlawyer employed or retained by or associated with a lawyer:
(a) a partner and a lawyer who individually or together with other lawyers possesses comparable managerial authority in a law firm shall make reasonable efforts to ensure that the firm has in effect measures giving reasonable assurance that the person’s conduct is compatible with the professional obligations of the lawyer; and
(b) a lawyer having direct supervisory authority over the nonlawyer shall make reasonable efforts to ensure that the person’s conduct is compatible with the professional obligations of the lawyer.”

In addition, while supervising nonlawyers, lawyers must take steps to ensure that nonlawyer staff understand their obligations to protect confidential client information. See ABA Informal Ethics 88-1526 Imputed Disqualification Arising from Change in Employment by Nonlawyer Employee (1988) (“Under Model Rule 5.3, lawyers have a duty to make reasonable efforts to ensure that nonlawyers do not disclose information relating to the representation of clients while in the lawyer’s employ and afterward”).

This issue arises constantly when vendors run computer searches of documents and then produce directly to opposing counsel. The non-supervised release of privileged material can be an enormous problem for a firm, so much so that Geraghty and Michmerhuizen noted an excerpt from Comment [3] to Rule 5.3, which states:
… Nonlawyers Outside the Firm
[3] A lawyer may use nonlawyers outside the firm to assist the lawyer in rendering legal services to the client. Examples include the retention of an investigative or paraprofessional service, hiring a document management company to create and maintain a database for complex litigation, sending client documents to a third party for printing or scanning, and using an Internet-based service to store client information. When using such services outside the firm, a lawyer must make reasonable efforts to ensure that the services are provided in a manner that is compatible with the lawyer’s professional obligations.

Keep all this in mind when retaining a technical expert. Do you really understand what they are doing? How much of the work being done by their computers are you actively supervising in a knowledgeable manner? The Qualcomm case stands for the proposition that attorneys cannot simply delegate to others, even their clients, the responsibility of understanding ESI and planning for ESI discovery. I would suggest that blindly relying on AI or other computer intelligence to make decisions does not rise to that necessary level of understanding.

Maura Grossman once remarked, “predictive coding is a process, not a product.” Always remember that technology is a tool, and humans use tools, not vice versa. The ultimate decision-making about what tool to use, and how to use it, resides with you, the attorney. As technologist John Martin once said, “It’s the archer, not the arrow.” So let us keep the attorney in AI.
