Stop me if you are shocked to hear it, but the legal profession won’t stop trying to tout AI, and GAI in particular, as something that absolutely has to be enmeshed into every aspect of a lawyer’s practice.
But here’s the thing: there continue to be a lot of things that GAI simply isn’t useful for, with legal research pretty much at the top of that list. And here’s the other thing: as to the things it is sort of useful for, it is getting worse.
Now, I am not a Luddite. And I continue to actually speak at CLE events on the legal ethics issues associated with GAI. I do that because, as mentioned later in this post, it is useful for some tasks and because the horse is so far out of the barn that I don’t think it will ever be corralled and put back.
But it has now been more than two years since the Avianca debacle, and it seems like monthly, if not weekly, there is another example of a lawyer getting sanctioned for filing things with a court containing citations to hallucinated cases. There have been several recent occasions where I was tempted to write about this topic in response to various examples of it, ranging from very large law firms falling into the trap to smaller firms and solos continuing to do so, but none of those seemed like a moment to say anything really all that new.
But now there is the first (at least the first reported) instance of a court actually entering an order in a case that itself cited to hallucinated case law. You can check out the Georgia Court of Appeals opinion reversing the trial court’s ruling here:
Now, the source of the problem here (in addition to GAI as a contributing factor) is again a lawyer in the case who included a fake citation in a proposed order. But, much like the failure of checks and balances we are watching generally in the United States at the moment, this shows the risk to the system of assuming that courts can be counted on as a check to ferret out lawyers or other litigants injecting fake precedent into the adjudication of cases. Perhaps the most entertaining (but not in a positive way) aspect of the story is that after the lawyer was called out for the fake citations finding their way into the court’s ruling, the lawyer responded by offering many more fake citations (11!) to support his position in his briefing to the appellate court.
I’ve written in the past about what was then my sneaking suspicion that GAI really isn’t ever going to be a worthwhile tool for performing legal research. The seemingly intrinsic flaws in GAI’s ability to understand text as anything other than text of equal importance aren’t limited to hallucinating entirely nonexistent cases like the situations mentioned above. I am also seeing frequent instances of people referencing cases that actually do exist but claiming that those cases say things that are nowhere in the opinion itself or that they stand for propositions that, when read, they clearly do not support. In many instances, the only reasonable explanation is that the lawyer (or pro se litigant) is relying upon what an AI product has told them the case says.
I have seen this in litigation other people are involved in that I happen to be exposed to. I have even had clients who did not like the advice I was giving them argue with me by sending me legal authorities that supposedly undercut my analysis, only for me to then have to explain that actually going and reading the case shows that nothing in the alleged summary is there.
Here’s an anecdotal example of another growing area where GAI hurts more than it helps: the email service I use for my personal email account insists on forcing AI summaries of the emails I receive into the top of the email itself (often the only thing you see when the summary is displayed as part of your inbox view), and those summaries are incorrect almost half of the time in ways that would be a real problem if I were ever actually willing to rely upon them.
And then there is the unknown amount of detrimental impact that is going to come from players in the marketplace such as Elon Musk and his xAI company manipulating how its GAI product “Grok” works because he doesn’t like the answers it provides based on its original training data set. Twitter is a toxic cesspool where no one should go, but, if you do, you can find in his account where he launched an effort to “retrain” his GAI product based on input from Twitter users. You can also read about his endeavor without going into that cesspool here. Apparently as a result, Grok has now, in just the last week, started occasionally writing in a voice as if it were Musk himself answering questions and offering even more anti-Semitic responses than one might already expect from a Musk endeavor.
And the thing is, the content created and disseminated by Grok is going to get added to the data on which other AI products are trained, and thus the misinformation is simply going to spread further, because it is likely to work a bit like a toxic feedback loop. The influence of inaccurate GAI output being fed to other GAI models as training data seems to be making AI products less useful over time for just about anything other than idea generation. So, it is quite possible that the best days of AI/GAI as a useful product are actually already in the past.
Carry this point a bit further intellectually: when you read the Georgia Court of Appeals opinion above and see that the court lists the fabricated citations toward the end of the opinion, you can pretty readily guess what is going to happen at some point in the near future, right? A GAI tool that can’t truly read and analyze case law is going to reintroduce those fake cases and bring them to some user’s attention by signifying that they were somehow cited approvingly in that Georgia Court of Appeals opinion.
And then there are things like this story, where Law360 reporters are being required to run their articles through an AI tool meant to detect and prevent “bias.” As the article lays out, and unsurprisingly given that the primary source of claims of media bias for the last twenty years (claims on which any such AI tool would be trained) has been right-wing actors insisting that accurate reporting of things done by Republican lawmakers is itself biased, the kinds of things this AI “tool” was requiring of reporters included the following:
On June 12, a federal judge ruled that the Trump administration’s decision to deploy the National Guard in Los Angeles in response to anti-ICE protests was illegal. Law360 reporters were on the breaking story, publishing a news article just hours after the ruling (which has since been appealed). Under Law360’s new mandate though, the story first had to pass through the bias indicator.
Several sentences in the story were flagged as biased, including this one: “It’s the first time in 60 years that a president has mobilized a state’s National Guard without receiving a request to do so from the state’s governor.” According to the bias indicator, this sentence is “framing the action as unprecedented in a way that might subtly critique the administration.” It was best to give more context to “balance the tone.”
Another line was flagged for suggesting Judge Charles Breyer had “pushed back” against the federal government in his ruling, an opinion which had called the president’s deployment of the National Guard the act of “a monarchist.” Rather than “pushed back,” the bias indicator suggested a milder word, like “disagreed.”
Also, this particular bias tool suffers from what looks like a variation on the intrinsic GAI problem discussed above, the same one that thwarts the technology’s ability to actually understand court opinions and legal precedent:
Often the bias indicator suggests softening critical statements and tries to flatten language that describes real world conflict or debates. One of the most common problems is a failure to differentiate between quotes and straight news copy. It frequently flags statements from experts as biased and treats quotes as evidence of partiality.
And the required bias tool at that media outlet has come about because it is being pushed by Lexis/Nexis, one of the companies pushing hardest to force GAI products onto lawyers whether or not those products are all that beneficial.
As to idea generation, GAI is still very hard to beat in terms of efficiency. You want something to make an instantaneous first draft of a letter for you? Fine. You want something to reword an email for you to make it seem more professional or less adversarial? Fine. You want something to generate an outline for deposition questions on some topic you haven’t dealt with before or recently? Fine. In all those situations, and many more, it’s pretty good at giving you a starting document that you can edit and revise to make your own.
Meanwhile, these products that really might only be a valuable tool for extremely limited purposes – purposes that largely do not justify using anything other than free products – continue to consume valuable resources at an advancing pace and seem destined to help force global temperatures higher. It would be nice if someone with true influence could make all this stop and roll back so we could have a reasoned discussion about the pros and cons, but that seems an unlikely ask at this stage of the cycle.
(I know there are a couple of other subjects where I likely owe people some updates (or at least where I have heard from some people curious about the latest developments), but those don’t really merit their own post so I’ll put those updates in the comments to this post.)
4 replies on “Gee, AI isn’t getting any better.”
Promised update #1: The Tennessee Bar Association has not deviated from its unforgivable course of silence in the face of unprecedented attacks on lawyers and the rule of law. I did, in fact, end my membership with the TBA after more than two decades. In turn, I have actually now joined the Knoxville Bar Association as a small way of showing my support for its President, who has been willing to speak out.
Promised update #2: The Florida Bar, when last I checked, has not even acknowledged receiving a referral of my Bondi complaint from the Tennessee BPR. It also appears pretty clear that the BPR has no intention of making the case that it has jurisdiction over the press conference remarks aimed at prejudicing the Abrego Garcia proceedings.
Good article, Brian. I have graduated to full Luddite status, pushed in large part by the unceasing propaganda telling us that AI will “change everything” and that if we don’t adopt it we will “be left behind.” As we march into this brave new world, getting left behind seems like an attractive option. I understand that AI cannot be fully avoided but we still have a choice to use some AI tools or not use them. My decision is to avoid AI as much as possible.
Thank you.