July 25, 2021

With all the analytics companies being bought up recently, I made the above comment to Doug Austin in discussing his column a few weeks back on the new NIST standards for developing trust in Artificial Intelligence programs. ( )

But maybe the analogy isn’t a good one. A better one may be that this is a case where we DO want to see the sausage being made. I’ve talked about this before  ( ) but the NIST paper dives into some detailed analysis.

NIST talks about trust as a key element in getting lawyer buy-in on AI. True, but first it would be nice if vendors explained to us what they actually mean by AI. Keep in mind that eDiscovery is only one of many uses for AI. The ABA lists all the possible uses in legal as:

  • e-Discovery
  • expertise automation
  • legal research
  • document management
  • contract and litigation document analytics and generation, and
  • predictive analytics

(see the full article online at: )

So explaining how “revolutionary” or “groundbreaking” your AI is only helps me if I know specifically how it works in my particular use case.  Specifically works.

Think I’m exaggerating the definition problem? Here are some examples of what vendors say about AI, taken right off their websites.

 AI is the next frontier.

AI is the future of eDiscovery

Our AI uses cutting edge artificial intelligence and machine learning.

Our AI gives extraordinary results.

AI … Believe the Hype

Embrace the groundbreaking magic of artificial intelligence with (name deleted)

Precise predictions in a fraction of the time required for traditional review

Infusing AI across the entire E-Discovery process

(name deleted) artificial intelligence capabilities are built on top of the latest innovations in Deep Learning, Natural Language Processing, compute intensive hardware processing and other related architecture approaches (editor’s note: “compute” is not a typo)

And my personal favorite

yeah … this is what the future feels like

Perhaps part of the problem is that people aren’t really sure how these programs work. I don’t say that to be critical but to point out how difficult this subject is. Tess Blair of Morgan Lewis does a wonderful presentation on the ethics of AI which has an equally wonderful overview of the history of the subject. In it, she has a slide with a quote from an MIT Technology Review article that says “No one really knows how the most advanced algorithms do what they do. That could be a problem.”  Duh!! Can’t put anything over on those Techies.

You can see her entire slide deck at . I highly recommend you take a look.

In the article on the NIST standard which Doug posted, the author (Jim Gill of IPro) uses the ATM example. “I trust it because it always gives me money.”  But unlike eDiscovery, when I use the ATM, the results are completely predictable. I know I want 40 bucks, I know I have at least that much in my account, and I know that if the ATM is working it will give me 40 bucks.

My trust in the ATM is well established because it is a simple system based on how much I have in my bank account. I know what the result is going to be, and I also know if I have a problem, I can call the bank and resolve it right away.

Perhaps that describes how lawyers work in eDiscovery more than we want to admit, getting what you already know is there, but is it really a test for trusting AI?  If I know the expected result, do I need AI? Wouldn’t simple searches help me find what I already know is the right answer? Sort of like the “Check your balance” key on the ATM. OK, yup I have 60 bucks. Give me 40. Thanks.

Many years ago, a colleague in the Federal Defenders office was being given a demo of a new review tool. After he saw all the searches and filters and reports he could generate, the rep said to him “So what do you think?”

He paused for a minute and said, “I think I need a button on the main screen that lets me search for my client’s name.”  He knew exactly what he needed from the documents when they first came in and wanted a simple way to get the result. Same thing.

The other analogy people like to use for AI is the comparison to on-line music search engines. I hate this one. Why? Because the music in these programs is specifically curated to give you the information you want.

“Curated”.  As in already searched, tagged, indexed and ready for you to scroll through. When Pandora got in the streaming business, they used what they called the “Music Genome Project,” a review of 700,000 songs, by 80,000 artists. The idea was to have musicians listen to and decode a song’s “DNA” and then categorize it according to different musical qualities such as meter and tonality. Numbers are assigned to the over 450 different attributes, and when you search, Pandora’s algorithm goes to work, finding other songs that match these same qualities.
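To make the idea concrete, here is a minimal sketch of how attribute-based matching of the Music Genome variety might work. The songs, attribute names, and scores below are all invented for illustration (the real project scores some 450 attributes per song); the technique, scoring songs as vectors and ranking by similarity, is the point.

```python
import math

# Hypothetical attribute scores on a 0-9 scale; the songs and
# numbers are invented purely for illustration.
songs = {
    "Song A": {"meter": 4, "tonality": 7, "distortion": 2},
    "Song B": {"meter": 4, "tonality": 6, "distortion": 3},
    "Song C": {"meter": 3, "tonality": 1, "distortion": 9},
}

def cosine(a, b):
    """Cosine similarity between two attribute dictionaries."""
    shared = a.keys() & b.keys()
    dot = sum(a[k] * b[k] for k in shared)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

def most_similar(seed, catalog):
    """Rank every other song in the catalog by similarity to the seed."""
    others = [name for name in catalog if name != seed]
    return sorted(others,
                  key=lambda name: cosine(catalog[seed], catalog[name]),
                  reverse=True)

# Song B's scores sit much closer to Song A's than Song C's do,
# so a seed of Song A ranks B ahead of C.
print(most_similar("Song A", songs))
```

The "station" experience is just this ranking applied over a much larger, human-scored catalog, which is exactly why the curation matters: the algorithm is only matching numbers someone already assigned.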

Hardly describes the average eDiscovery project, right? Sure, it would be nice if that 2TB of data was already categorized and indexed when you start your search, but it isn’t.

Furthermore, what if they get it wrong? What if they label Jimmy Buffett as country not rock? For that matter, what is rock? Is it Kiss, the Stones, AC/DC, Metallica, Gnarls Barkley, Tom Waits? What about the Beatles? How does Yesterday stack up against Yer Blues, or I Wanna Hold Your Hand against Golden Slumbers/Carry That Weight/The End? And what in the wide wide world of sports is Revolution #9? Someone run that one by Yoko.

Now Spotify uses a different approach, which it calls collaborative filtering: recommendations drawn from music liked by other users and from data gathered from millions of music blogs. For new music, Spotify analyzes the audio itself, training its algorithm to recognize different characteristics of the music, such as harmony or distorted guitars.
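The collaborative-filtering idea itself is simple enough to sketch in a few lines. Everything here is invented for illustration (made-up users and listening data, and real systems weight millions of signals rather than counting raw overlaps), but it shows the core move: recommend what people with overlapping taste already liked.

```python
from collections import Counter

# Invented listening data: which users "liked" which songs.
likes = {
    "ann":   {"Happy", "Tumbling Dice", "Moondance"},
    "bob":   {"Happy", "Tumbling Dice", "Green Onions"},
    "carol": {"Moondance", "Ooh La La"},
}

def recommend(user, likes):
    """Suggest songs liked by users with overlapping taste, most common first."""
    mine = likes[user]
    counts = Counter()
    for other, theirs in likes.items():
        if other == user or not (mine & theirs):
            continue  # skip ourselves and users who share no songs with us
        for song in theirs - mine:  # songs they like that we haven't heard
            counts[song] += 1
    return [song for song, _ in counts.most_common()]

# Ann shares songs with both Bob and Carol, so she gets
# suggestions from each of their lists.
print(recommend("ann", likes))
```

Notice what this sketch never does: look at the music itself. That is why the audio-analysis side exists, since a brand-new song has no listening history for the filter to work with, the so-called cold-start problem.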

Spotify recommendations start on the home screen with an AI system called Bandits for Recommendations as Treatments, or BaRT, which is the alternative to playlists. Like Pandora, it will let you add songs you find to your “channel,” which is where both services resemble continuous active learning.

I found Ry Cooder in the artist list, saw one of my favorite songs of his, Ditty Wah Ditty, and created a “radio station.”  Spotify then showed me 50 songs, 7 by Ry and 43 others including Ooh La La by the Faces, Yes We Can Can by Allen Toussaint, Who Do You Love by Bo Diddley, Big Chief by Prof Longhair, Green Onions by Booker T and the MGs, Moondance by Van and Little Red Rooster by Howlin Wolf.  Huh?

Then I did a specific title search for “Happy”. Lots of results. Songs, albums, artists.  Lots.  But no Keith on Exile on Main Street.

Hmmmm. I’m thinking maybe I’m not on the same page with a lot of those 248 million users.

So, to return to the opening comment. Aren’t we looking at the wrong end of the sausage making process if we only look at and rate results?  If I go to the store and buy sausage, I don’t randomly buy 5 or 6 or even 10, take them home, throw them on the grill with some ribs and corn and then taste them all to decide which one I like.

First, I decide if I want creole sausage, hot links, beer brats, andouille (yummy), Jimmy Dean pork sausage, bratwurst, knockwurst, etc. Then I find the type I am looking for and look at the labels to check ingredients. Finally, I decide on a price point for the ones I am interested in. THEN I test the 2 or 3 that are left.

Shouldn’t we do the same with AI? Because let’s remember, folks, that no two algorithms bring the same results. So, no two AI processes bring the same results or documents to the front of the line. Saying I have reliable results from ACME AI means, well, that I have reliable results from ACME AI. It tells me nothing about the results from RoadRunner AI.

In fact, it’s almost a guarantee that the results will be different. I’m not saying the accuracy or reliability will be different, I’m saying the actual results, the documents found, will be different. So how can I compare reliability scores if what they are saying is reliable is different from program to program?
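The comparison problem is easy to see with a toy example. The document IDs below are invented, as are the two "tools," but the arithmetic is the standard way to measure agreement between two result sets (the Jaccard index: shared documents divided by total distinct documents):

```python
# Invented document IDs returned by two hypothetical review tools
# run against the same collection with the same request.
acme = {"DOC-001", "DOC-002", "DOC-003", "DOC-004", "DOC-005"}
roadrunner = {"DOC-004", "DOC-005", "DOC-006", "DOC-007", "DOC-008"}

overlap = acme & roadrunner            # documents both tools surfaced
union = acme | roadrunner              # every distinct document either surfaced
jaccard = len(overlap) / len(union)    # 0.0 = disjoint, 1.0 = identical

print(f"shared: {sorted(overlap)}")
print(f"Jaccard overlap: {jaccard:.0%}")  # 2 shared out of 8 total = 25%
```

Both tools here could honestly report "five reliable hits," yet they agree on only a quarter of the combined results, which is precisely why a per-vendor reliability score tells you nothing about agreement between vendors.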

How do I know the results will not be the same? Because a law librarian told me so. Growing up in Vermont, I remember many a long walk up and down the snow-covered hill where we lived to the old Carnegie funded library, a massive brownstone building. One of many such libraries I visited growing up in New England.

I came to believe in the giddy days of my yute that librarians were the font of all wisdom. And as Gayle never tired of reminding me, “when you absolutely, positively need the answer overnight, ask a librarian.” So when I saw an article by Bob Ambrogi about a study by law librarians showing that different legal research platforms deliver surprisingly different results, I paid attention.

Bob reported on a draft research paper entitled The Algorithm as a Human Artifact: Implications for Legal [Re]Search. Some of the findings were presented by the author, Susan Nevelow Mart, director of the law library and associate professor at the University of Colorado Law School, in a program at the annual meeting of the American Association of Law Libraries in 2017.

Bob noted that:

“Mart’s exploration of the differences among research services was spurred in part by an email she received from Mike Dahn, senior vice president for Westlaw product management at Thomson Reuters, in which he noted that “all of our algorithms are created by humans.” Why is that statement significant? Because if search algorithms are built by humans,  then those humans made choices about how the algorithm would work. And those choices, Mart says, become the biases and assumptions that get coded into each system and that have implications for the results they deliver.”

“Significant” is the right word. There was hardly any overlap in the cases that appear in the top 10 results returned by each database: only about 7 percent of the cases were returned in the search results of all six databases. Likewise, relevance differed widely across the search engines, with the percentage of relevant cases delivered ranging from a high of 67% to a low of 40%.

And given that figure, how many of those unique cases were relevant? At the high end, one vendor returned cases of which 33.1 percent were both unique and relevant, while at the low end just 8.2 percent of a vendor’s unique cases were also found to be relevant.

Mart’s conclusion?  “Legal research has always been an endeavor that required redundancy in searching; one resource does not usually provide a full answer, just as one search will not provide every necessary result. This study clearly demonstrates that the need for redundancy in searches and resources has not faded with the rise of the algorithm.”

The answer, Mart believes, is that legal research companies need to be much more transparent about the biases in their algorithms. They need to employ what she called “algorithmic accountability”.  I’d suggest the same is true for AI.

Given that caution, here’s another recent article on a way to look at validating AI by tying it first to the values of the underlying business model. It’s called Defining “Value” – the Key to AI Success and you can find it at . Of course it’s from a librarian. In this case, Jeff Brandt, the CIO at Jackson Kelly and editor of Pinhawk Law Technology Daily Digest.

Now the article does define AI success in a manner that may not fit the eDiscovery workflow but the point is that it asks the user to think like a data scientist. The value proposition is that “… If we want AI to work for us humans (versus us humans working for AI), then we must thoroughly define “value” before we start building our AI ML models.”

Now unfortunately, thinking like a data scientist means using this chart.

But that’s OK, because to help the process, NIST decided to coin an acronym for the AI user trust characteristics in their model: Perceived System Trustworthiness. That’s right. PST, an acronym we already use in legal data management.

And if that’s not confusing enough, then consider the following formula they devised for determining (their) PST:

Well sure. I mean who doesn’t know that?

OK, maybe thinking like a data scientist isn’t the way to go. Maybe we should start by thinking like people who don’t speak math as their first language. But the point is we do need to examine how we look at AI and I’d suggest that the best place to start is at the front end.

This isn’t Star Trek nor should we just “… believe the hype.” We have an ethical duty to truly understand this technology in order to be able to explain it to our clients and the Court, when required. I once wrote that AI should really stand for Attorney Intelligence (  )

Bob Ambrogi may have put it best in the article I mentioned at the top, “We can push companies to be more transparent about their algorithms, but in the meantime, we should remember that time-worn piece of advice: consider the source.”


March 22, 2021

I always like to wait a few days before I review a big tech conference to let the impact settle in. And not just for me but for attendees and exhibitors as well. That said, here are my thoughts on the ABA Techshow 2021.

This year was the first ever virtual TechShow and as such defied most descriptions I could think of. When I wrote about the 2020 TechShow I called it Woodstock for Techies, but I had originally thought of calling the column “Why the ABA TechShow is Like Mardi Gras,” and that may be even more appropriate this year since neither of these landmark events occurred in person: TechShow for the first time ever, Mardi Gras for only the 15th time, the other occasions being during times of war, civil unrest or, strikingly, in 1879 due to a massive outbreak of Yellow Fever.

Mardi Gras was a wash due to cancelled parades and a terrible cold snap, but the online version of TechShow was a resounding success. On the first day, for the fifth consecutive year, legal news reporter Bob Ambrogi hosted the startup competition he first brought to the show, in which 15 finalists face off to give 3-minute pitches about their legal startups. Brilliant.

Then, 70 sessions in 5 days. Wow!

Multiple tracks. And when I say multiple, I mean multiplicity. Collaboration, Ethics, Diversity, Litigation, Core Concepts, Law Office Management, Cybersecurity, Disruptive Innovation, Well Being, Virtual Remote, Marketing, Future Proof, Automation, Technology, The Next 20, Lessons Learned, Business Plan Bootcamp.

And sprinkled in among all these sessions were TechTalks and micro demos from various vendors and meeting rooms. You literally couldn’t tell the players without a scorecard.

In a post-show chat I had with Co-Chairs Roberta Tepper and Allan Mackenzie, I was told that the high number of sessions was a conscious decision to increase the value of the show. Given the fact that they did not have to worry about people physically moving from room to room between sessions, it was easy to schedule more content. And to counter any problems with “Zoom fatigue” or attention span, some sessions were reduced to 45 minutes and some micro demos were dropped to 30 minutes. In addition, “non techie” sessions were a distinct focus due to Covid-19 and people feeling isolated in their various quarantine regulations.

The result was well received.  Not only were the well-being sessions well attended but core concept tracks also showed high numbers. And all the practice management sessions had over 100 attendees.

55 vendors supported the conference. Several long time exhibitors did opt out of the virtual experience, but those who stayed were pleased, especially companies like MyCase (see their review at ) and CenterBase ( ), which chose to do micro demos; both commented on their positive experience.

Specifically, Bill Gallivan, CEO of Digital WarRoom, said that “The virtual software worked fine – it was especially good at meeting people based on how they tagged themselves (i.e. Litigator, Dispute Resolution, Solo Firm, etc.)”

The technology was also quite stable, no small feat when you are dealing with hundreds of simultaneous users logging in, collaborating, and downloading material. Kudos to the ABA tech team, especially Lindsey Kent and Josh Gaton.

All in all, the experience was very good. Final attendance numbers were not in at the time I wrote this, but Roberta and Allan did tell me that the number of paying attendees had exceeded last year’s live show. The fact that the 2020 event gave away a high number of free exhibit hall passes may lead to a lower overall count in the end, but the fact remains the paid attendance was up in a totally new environment.

But to me the most important factor every year is the people themselves. The excitement, the camaraderie, the sense of finding something new and exciting can’t be replicated in a virtual show. Still, the collegial approach, which involved not just listening to speakers but talking to each other in meet-up rooms and engaging in Zoom sessions with vendor personnel in many of the virtual booths, came darn close.

So kudos to conference co-chairs Roberta and Allan for putting together a great show with great attendees and speakers, all helping each other to move the technology bar forward. I found the level of participation, enthusiasm and cooperation extremely high, something that of course we’ve talked about an awful lot in the eDiscovery world and found lacking for several years now. The Techshow emphasis on solo and small firm attorneys bodes well for the future of conferences focusing on legal technology.

What is that future? Bill Gallivan told me that a totally virtual show “… is probably not the best model for the ABA going forward since vendor payments will decline but it may be a better way to deliver content and information from ABA subscribers since there are no travel costs and content can be semi-pre-recorded for better delivery.”

And Don Swanson, long time participant in legal technology consulting and president of Five Star Legal, noted that he thought smaller groups will be the focus of the future for live events, perhaps even put on by law firms or vendors.

Either way, hopefully we will see you next year live and in person at the Hyatt.  And of course, the Billy Goat Tavern! Cheezborga, Cheezborga, Cheezborga.

Goodbye to ILTA>ON and Again to Browning Marean

August 28, 2020

It’s Friday Aug. 28 and the fifth and final day of ILTA>ON is coming to a close. It was a wonderful event and, as always, featured outstanding educational content and robust, albeit virtual, social interaction.

In 2014 at this time, I was at ILTA and sending emails to my dear friend Browning Marean, who was hospitalized in Texas fighting cancer. Earlier in the week I had gathered a number of his friends and colleagues for a group photo, which we sent him and posted on social media. I also took a photo of him, had it signed by everyone I could find, and sent it to him via FedEx.

Two days later Gayle and I were back in New Orleans, driving to the French Quarter for lunch when we received the phone call saying Browning had left us. Gayle burst into tears and I had to pull over to compose myself and then started crying as well.

I miss him just as much today as that day. The great Boston Celtics basketball player Bill Russell once said about Larry Bird, “He’s a better person than a basketball player.” Browning was a better friend than a colleague.

Sometime after he passed, I received the picture back in the mail. I never learned who sent it but I keep it on my desk, look at it constantly during each day as I work and often ask myself, “what would Browning do?”

Write if you get work old friend.

ILTA>ON Wraps Up Today

August 28, 2020

Today is the last day of ILTA>ON and the high caliber educational sessions don’t let up a bit. The keynote address today is Legal’s Next Disruptor? Demystifying the Big 4, a discussion of the changes caused by corporate legal departments working in new ways to handle eDiscovery, including the use of external resources beyond their outside counsel.

One of the speakers is Peter Krakaur, Managing Director of EY Law. I’ve known Peter since his days at FindLaw and as a leader in the KM movement thru engagements at Brobeck, Heller Ehrman & Orrick. The term “thought leader” is overused in our profession but Peter genuinely fits that description and I highly recommend this session for a glimpse into the future of legal services worldwide.

Today’s tracks include two on Creating The Future Together as well as Business & Legal Process Improvements and, as always, Doug Austin provides a fine overview of the day’s activities in his post at

And don’t forget to visit the vendors in the Solutions Center at . They will be available with Zoom chat sessions until 5PM today.

Finally, a big word of thanks to Joy Heath Rush and the ILTA staff for producing another excellent educational conference, and a special thanks to Beth Anne Stuebe and all her fine staff for their assistance all week in the Press Room, which made our transition into a virtual environment so much easier!

Cya next year in Vegas. Or here. Or maybe both!! Be safe everyone.

Don’t Forget the Vendors At ILTA>ON

August 27, 2020

Day 4, and the sessions are continuing with their high quality and levels of attendance. The Litigation Roundtable yesterday had over 150 people and broke out into smaller groups for focused chats, something you can’t do in a large room at a live conference. And the first-time Marketing track had over 120 people in both of the sessions I attended.

Today’s keynote speaker is Haley Altman, Global Director of Business Development & Strategy at Litera Microsystems, an ILTAMax Sponsor of the conference, talking about Bold Moves and Big Opportunities in our current environment. Featured tracks today are Tech Adoption/Artificial Intelligence, Applications, and Finance (AM)/Info Governance (PM) and, as always, Doug Austin of eDiscovery Today provides a good overview of the sessions at

Finally, if you’ve attended other virtual conferences this year, you may have seen a virtual exhibitor hall and are wondering where that is in ILTA>ON. Look for the Solutions Center tab on the main page and click there. You’ll find nearly 100 vendor sponsors and if you click on any one of them you’ll find a video about the company, a link to various resources, some company info and some even have a button to click for a demo.

But all of them have a Zoom chat feature that is available from 8AM to 5PM Central every day. That’s right, you can chat live with a company rep if you have specific questions, need a follow up or just want to say hi to long time ILTA supporters like Dan Berlin at TABS 3, Helle Schwartz-Grossman at WorldDox, Paul Unger at Affinity or even the fine folks at ACEDS. So find that Solutions Center and take a virtual stroll thru the exhibit hall!


August 26, 2020

It’s Wednesday and day three of ILTA>ON. Today’s keynote speaker is Richard Punt, Managing Director of Legal Strategy & Market Development at Thomson Reuters, and his topic is AFTER THE QUAKE: PREDICTIONS FOR AN UNCERTAIN LEGAL FUTURE.

The other tracks for today include Litigation Support, IT Operations, Office 365 and Marketing. As always, Doug Austin of eDiscovery Today gives a good synopsis of the day’s events, which you can see on his blog at . Doug himself will be speaking in a session about clawbacks and redactions at 12:45 PM CST, moderated by ILTA stalwart Cindy MacBean of Honigman LLP.

As always, day passes are available at , and I’ll be reporting in during the day as I attend various sessions.

ILTA>ON Is Officially ON

August 25, 2020

ILTA>ON is underway. It began yesterday with some great sessions, including my personal favorite, a sneak peek at the forthcoming ILTA technology survey. For a great recap of that session, see the summary by David Horrigan, Discovery Counsel & Legal Education Director at Relativity, at . David also had a great interview later in the day with Doug Austin of eDiscovery Today, which you can see at

Attendance is high, with over 2,395 attendees reported online Monday and over 90 vendors in the Solutions Center, all featuring live Zoom chat capability from 8AM to 5PM every day. Great feature!!

The keynote today is TED Talks superstar Jia Jiang


and for a list of all the other sessions, see the great daily overview from Doug Austin at

Daily passes are available at ($99 for members, $199 for non-members) so don’t be afraid to zero in on the specific sessions you’d like to see.

I’ll be back later today with more feedback …..

New Relativity One UI Makes It Extremely Easy for Users to Get to Work Right Away.

June 22, 2020


Relativity held their annual Relativity Fest London event virtually in May this year and the keynote speaker, Relativity chief product officer Chris Brown, spoke about both their recently announced pay as you go pricing model and the new, currently under soft release, UI for RelOne called Aero.

RelOne has been around for four years, and while changes to the interface have been going on for about three years, the Advanced Access Group came into play in early-to-mid April and began working with this completely new UI. The group consists of two channel partners, two corporations, and two law firms, all of which have been instrumental in guiding the development of the UI with their feedback.

Relativity has been saying that Aero is more than just a fresh coat of paint and current users are being quoted as saying the new “ease of use and simplicity” is “… already having an impact.”

All this discussion of course piqued my interest, so I cast around, watched several of their webcasts and was finally able to arrange a personalized demo firsthand. Aero won’t be officially released until September, but it is commercially available now through providers in the Aero Advance Access program. Here’s what it looks like.

Overall, the 3 main goals of Aero set out by Relativity are:

Intuitive Workflow

Designed to get you to what you need faster, RelativityOne delivers an intuitive and streamlined platform, reducing unnecessary clicks and decisions so you have exactly what you need to accomplish your work.

Light-Speed Performance

Aero delivers what you need fast. Whether you’re flying doc-to-doc, running batch operations, or moving across the platform, everything is available when and where you need it. Documents with large page counts load much faster now, rendering on a page-by-page basis rather than waiting for the entire document to render.

Easy Navigation

With logical workflows, step-by-step navigation, and simplified processes, you can move through the platform without thinking about where to go next. The modernized aesthetics have removed ~70k clicks and minimized cursor travel to increase efficiency.

My specific impressions of the feature set are:

  1. First major change that you will see is that the tabs on the top now become categories on the left
  2. There are no default categories yet, but there will eventually be some based on a user profile or case defaults
  3. Document previews show in a viewer window which is a view only mode, but you can click on the DocID to bring up the full document and perform coding
  4. The full doc viewer has the complete doc listing on the left and you can jump to any document
  5. You can also pop up document history or image thumbnails as you scroll
  6. The dashboard is collapsible
  7. Ability to save searches, as well as the long overdue ability to search across saved searches and a mass copy/move/delete feature
  8. Filtering is available by person or by date
  9. Search enhancements include:
    • Searching for emojis or emoticons
    • Persistent highlights
    • Search for ASCII symbols
    • Highlight one term and focus search
    • Find conceptually similar in a paragraph
    • Display zero hits
  10. Direct loading of documents
    • Can drag and drop up to 100 “loose documents”
    • With large files, can look at pages that have loaded while the remainder of the loading continues.  Large docs are now in essence rendered on a page by page basis
  11. Adjust extracted text size in a manner that is similar to resizing columns in Excel
  12. Hardware agnostic
  13. Browser agnostic
  14. May have some version requirements especially with regards to the working version of Windows
  15. Field creation can occur on the fly
  16. Automatic workflows including:
    • Automated DT search updating as data is loaded
    • Analytics
    • Privilege lists
    • These will require setting a rule simultaneous to loading
  17. Predictive coding
  18. Azure
    • Hosting
    • Invariant processing

A general release was originally planned for September, although it remains to be seen if the COVID-19 pandemic has any effect on that. As the graphic below shows, however, Aero is available now. Pricing is currently said to be a flat subscription fee plus a user charge, or pay as you go based on usage.

If you’d like to chat more about Aero or arrange for a demo the way I did, just contact me at  

O365 eDiscovery Search Part 2 with Rachi Messing and Tom O’Connor

May 29, 2019

In our previous installment on Content Search we discussed basic searching and how to work with the results. This session covers some of the deeper filtering functionality that can be performed in a Review Set, along with advanced search techniques and basic ECA functionality using those techniques. In addition, Rachi mentions an exciting development regarding the new O365 ability to download data directly from Facebook in the main workloads.

Louisiana Misses The Mark On Technical Competence? Not So Fast My Friend!

May 28, 2019

My adopted home state of Louisiana has come under a lot of fire recently for not being up to speed on the duty of technical competence. Bob Ambrogi rather pointedly called his blog post on the subject A Tech Ethics Opinion that Misses the Mark, and in that column referred to another post by Nicole Black, who had previously written about the same subject in a Legal News column. I know and respect both Bob and Nicole, but I’m going to go all NCAA College Game Day on them here and channel my inner Lee Corso.

Basically I think their comments miss the mark on several points. First, Bob states that

“ ….  the ABA’s first opinion to address Model Rule 1.1, Comment 8 — Formal Opinion 477 issued in 2017 — makes the point repeatedly that the duty of technology competence encompasses the ability to understand how the client uses technology, what technology systems the client uses, and the client’s degree of technology sophistication.”

So first let’s recall that ABA Model Rule 1.1, Comment 8 was passed in August of 2012 and merely says “… a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.” As I noted at the time, it was horribly vague, and it languished, rightfully so, for several years before states started incorporating some form of it into their Rules of Professional Conduct. Taken by itself, it doesn’t mention anything repeatedly.

ABA Formal Opinion 477 was issued, as Bob notes,  on May 1, 2017. This was well after the state technical competence bandwagon had started rolling but it dealt specifically with Securing Communication of Protected Client Information, the actual title of the Opinion. It does talk about technical competence quite a bit but, in my opinion, it is within the framework of protecting confidential client information, a specific technical discussion and not that of an overall duty of technical competence.

Next, both Bob and Nicole take the position that in today’s world, technical competence is a given.  Their comments seem to echo this line:

For example, a lawyer would have difficulty providing competent legal services in today’s environment without knowing how to use email or create an electronic document. 

But that quote isn't from the Model Rules either. It's from the ABA Commission on Ethics 20/20 Report issued in August of 2012. And in discussing that report, Formal Opinion 477 notes that the Commission said, in commenting on the proposed change to Model Rule 1.1:

The 20/20 Commission also noted that modification of Comment [6] did not change the lawyer’s substantive duty of competence: “Comment [6] already encompasses an obligation to remain aware of changes in technology that affect law practice, but the Commission concluded that making this explicit, by addition of the phrase ‘including the benefits and risks associated with relevant technology,’ would offer greater clarity in this area and emphasize the importance of technology to modern law practice. The proposed amendment, which appears in a Comment, does not impose any new obligations on lawyers. Rather, the amendment is intended to serve as a reminder to lawyers that they should remain aware of technology, including the benefits and risks associated with it, as part of a lawyer’s general ethical duty to remain competent.”  (my emphasis added)

Finally, Bob, much like everyone discussing this topic, quotes The State Bar of California's Formal Opinion No. 2015-193, with its "six things every lawyer needs to know about technology" emphasis. I hasten to point out that the California opinion was written in August of 2016 and specifically refers ONLY to eDiscovery matters.

Further, the last paragraph of the opinion specifically states:

This opinion is issued by the Standing Committee on Professional Responsibility and Conduct of the State Bar of California. It is advisory only. It is not binding upon the courts, the State Bar of California, its Board of Trustees, any persons or tribunals charged with regulatory responsibilities, or any member of the State Bar.

It is not a discussion of a general duty of technical competence, and I am not aware that the California Bar has either offered an opinion regarding such a duty or proposed a competency rule change on the subject. Nor has the California Supreme Court or Legislature amended its rules to reflect such a change.

So Louisiana amended its Code of Professionalism (not its Rules of Professional Conduct) to reflect a duty of tech competence, and in one paragraph seemed to follow Model Rule 1.1 by saying:

“I will stay informed about changes in the law, communication, and technology which affect the practice of law.”

Remember that ABA Model Rule 1.1 says:

“ … a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.”

So Louisiana left out the "benefits and risks" clause. Is that a major departure from the tone of the ABA Commission on Ethics 20/20 Report comment? Really?

Sorry, but I don't think so. Long before this Code change, whenever I asked LSBA people (and I speak at LSBA CLE events quite a bit … I'm talking to folks at the LSBA all the time) about the duty of tech competence, I was basically told, "well, we feel the duty of competence INCLUDES tech competence, and that's what we tell people who ask." Exactly as the ABA Commission on Ethics 20/20 Report stated: it's a reminder of an already existing duty.

Finally, Bob generally, and Nicole specifically, interpret the Louisiana wording as implying a choice of whether or not to use technology. I personally think that's splitting hairs in a way the Code doesn't intend. Perhaps a better word would have been "when," not "if," but still, do we seriously think anyone is NOT using technology? They could choose not to use a phone either, but I'm guessing their work would diminish. Rapidly.

Honestly, do you think the Louisiana Bar is telling people they don't have to use technology? Why not ask all the states that haven't implemented the recommendations of Comment 18 to ABA Model Rule 1.6 why they aren't taking "reasonable efforts" to protect against the inadvertent or unauthorized disclosure of, or access to, client information? Or ask California why it doesn't have any such opinion at all. Or ask all the states that passed these nice vague rules whether they have passed a subsequent CLE requirement for gaining that tech competence. I think we all know that the answer to that question is two.

But Louisiana has missed the mark? Come on man!!
