Predictive Coding: Solomon's New Baby?

Two recent events have brought new focus on predictive coding and its use in electronic discovery matters. The first was an order from Judge Robert Miller Jr. of the United States District Court for the Northern District of Indiana, issued April 6th in In Re: Biomet M2a Magnum Hip Implant Products Liability Litigation (MDL 2391). That order has been referred to in numerous headlines as an endorsement of predictive coding. Law Technology News, for example, asserted that “Indiana Federal Court OKs Jump-Start on Predictive Coding,” and the ABA Journal went so far as to say that “Judge OKs Use of Predictive Coding to Cut E-Discovery Document Review Group from 2 Million to 5,000.”

But in fact the Judge did nothing of the kind. To the contrary, he went to great pains to point out at page 4 of his Order that “The issue before me today isn’t whether predictive coding is a better way of doing things than keyword searching prior to predictive coding. I must decide whether Biomet’s procedure satisfies its discovery obligations …” And he found that it did.

Since that Order was published, however, several commentators have pointed out that the Biomet procedure was perhaps not as accurate as the Judge may have thought, a fact that unfortunately was not well laid out in the Plaintiffs’ brief opposing the process that Biomet used. Perhaps the most thoughtful critique of the general process employed is by well-known attorney and consultant William Speros, whose guest column “Predictive Coding’s Erroneous Zones Are Emerging Junk Science” appeared on Ralph Losey’s eDiscovery Team blog.

And several other commentators noted that the specific statistical analysis in the Biomet order was not entirely accurate, and that a better analysis would show that “only 35% of the relevant documents in the initial collection of 19.5 million were identified by this process.” See In re: Biomet – Doing the Math on Court Approved Multimodal Review.

Those commentaries bring us to the second event, which focuses scrutiny on the predictive coding quagmire. On April 19th, 60 invited delegates were convened by Duke University at a location in Washington, DC with the Federal Rules Committee to discuss Technology Assisted Review (TAR), a much better and more accurate phrase than predictive coding in my opinion. A full report can be found here, where eDiscoveryJournal contributor Karl Schieneman notes that the group started with a definition of two primary forms of TAR: machine learning approaches and rule-based linguistic modeling. But the machine learning group quickly split into two factions: the “Random Samplers” and the “Multi-Modulars.” The first group believes that random sampling is the only way to validate TAR results transparently, by providing measurements of recall and precision for a specific project. The “Multi-Modulars” suggest that random sampling is ineffective due to the low number of responsive documents in large document sets, and that a better approach is an iterative use of multiple tools to search through a collection.
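To make the “Random Samplers” position concrete, here is a minimal simulation of how a random sample of the unproduced documents can yield a recall estimate. Every number below — collection size, responsiveness rate, the tool’s true recall, sample size — is invented purely for illustration and has nothing to do with the actual Biomet figures:

```python
import random

# Hypothetical illustration only: all figures are invented, not drawn
# from the Biomet matter or any real review.
random.seed(42)

COLLECTION_SIZE = 200_000
PREVALENCE = 0.01      # assume 1% of documents are truly responsive
TRUE_RECALL = 0.75     # the tool's real recall, unknown in practice

# Ground truth: mark each document responsive (True) or not (False).
responsive = [random.random() < PREVALENCE for _ in range(COLLECTION_SIZE)]

# The hypothetical review tool "finds" each responsive document with
# probability TRUE_RECALL; non-responsive documents are never produced here.
found = [r and (random.random() < TRUE_RECALL) for r in responsive]

# Validation step: draw a random sample from the documents the tool did
# NOT produce, and count responsive ones that slipped through.
discards = [i for i in range(COLLECTION_SIZE) if not found[i]]
sample = random.sample(discards, 10_000)
missed_in_sample = sum(responsive[i] for i in sample)

# Scale the sample's miss rate up to the full discard pile, then
# estimate recall = produced / (produced + estimated missed).
est_missed = missed_in_sample / len(sample) * len(discards)
est_recall = sum(found) / (sum(found) + est_missed)

print(f"Estimated recall: {est_recall:.2f}")
```

The sketch also hints at the “Multi-Modular” objection: when prevalence is very low, the validation sample contains only a handful of responsive documents, so the recall estimate carries wide error bars unless the sample is very large.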

What is most troubling to me about the report of this meeting is that each side was harshly critical of the other. If these experts can’t agree on what works best, how are lawyers supposed to proceed with any sense of confidence? And worse, if the two sides in a dispute each offer an expert from a different camp to support their findings, how is a judge supposed to decide which is correct?

The discussion will continue on Aug. 15th, when the proposed amendments to the Rules of Civil Procedure will be released for public comment. You can expect the dialogue to become even more heated as the universe of commentators expands from 60 to everyone.

But perhaps the answer can be found in Judge Miller’s order where, at page 5, he stated that the Steering Committee’s request that Biomet start its entire process over again “… sits uneasily with the proportionality standard in Rule 26(b)(2)(C).” As Mark Twain once said, “There are lies, damned lies, and statistics.” Judicial use of the proportionality standard may be the baby of Solomon in this ongoing dispute.
