Use the right tool for the job.

If you need a very short version of everything I am about to write, it would go a little something like this.

Don’t use a calculator to try to determine whether you have spelled a word correctly. If you do that, don’t blame the calculator because you are the problem.

Even though it all transpired over a holiday weekend here in the United States, you have likely all read one piece or another about the lawyer in New York who is now in very, very hot water over using ChatGPT to perform legal research and/or write a brief. The initial result for the lawyer was that ChatGPT provided him with six or so case citations that supported his arguments but that were fabrications: made-up citations for made-up cases that never actually existed. When called out by the Court, the lawyer made his situation vastly worse by apparently going back to ChatGPT, asking it to confirm whether the cases it provided actually existed, and obtaining copies of the opinions from ChatGPT itself.

Those opinions, of course, were also fabricated entirely by ChatGPT. The lawyer now faces a show cause hearing on June 8 over his conduct and has come forward with an affidavit attempting to explain what he did and making sure the court knows that the fault, as between him and his co-counsel, is entirely his.

Now, that affidavit does still try to lay the blame for the whole situation on AI, i.e., that ChatGPT was really at fault.

You can read the full affidavit here:

Now, depending on who you look to (other than yours truly) for your written commentary on fun legal and ethical developments, you may have run into folks who concur that AI is to blame, or who are using this incident as support for their preconceived notion that lawyers should not use AI for much of anything at all, or who argue that the lawyer’s problem here was basic competence rather than a deficiency in the narrower concept of “technological competence.”

Of course, I know there are others out there who have appropriately diagnosed the problem, but I can’t help but offer my two cents.

This episode is not an indication that lawyers should not use ChatGPT or other AI in their practice. Not at all. This is absolutely a tech competence problem at heart. You can only use a tool for things that it is capable of doing correctly. Calculators are great for math. They won’t help you check your spelling.

Also, this might very well be a lawyer honesty problem, but we won’t know that last part for sure until the court gets through with its show cause hearing.

Shortly after the ABA revised the Model Rules to include language in a Comment making clear that lawyers have a duty of technological competence, I had several opportunities to speak about what exactly that duty likely entailed. I quickly got to the canned statement of … if a lawyer uses a technology, they are going to have to know how it works.

This lawyer failed that duty of technological competence by using ChatGPT for legal research, if that is what he did. Any lawyer who knows the basics about how ChatGPT works would know that you cannot use it to perform legal research because “making up cases” is a known issue. If what the lawyer initially did was have ChatGPT draft his brief for him, then it is much closer to the line whether his initial failing was a lack of technological competence or a lack of basic competence.

But when he then went back … after prompting from the court that the cited cases did not appear to exist … and asked ChatGPT to check its own work, that is a clear problem of technological incompetence.

And while this whole folly is not an indication that lawyers should be unwilling to use AI products for the things that particular AI products are designed to do … it is also a pretty decent opportunity to suggest that the people who are hell-bent on pushing AI as a solution for everything and scaremongering that AI will replace the work of all human beings should maybe take the rhetoric down a notch.

Why? One somewhat plausible explanation for how this lawyer could have thought it made sense to keep going back to ChatGPT for explanations of the problems it had created is that he may well have bought into the hype and rhetoric that AI will actually replace lawyers.

That explanation might be plausible, but I’m going to bet that it is not the probable one. The probable explanation (to me) is that, oftentimes, lawyers who make bad choices keep making bad choices when they try to undo the damage they’ve done to themselves.

5 replies on “Use the right tool for the job.”

Only lawyers … what? Not sure I understand the context or focus of the question?

My colleague, Brian, makes good points about ChatGPT. But he is way too nice in his description and analysis of this fiasco. It was lawyer incompetence or lawyer laziness, period.

Very interesting. I wonder where the line will be drawn with regard to how well a lawyer must know a product’s inner workings. When conducting case research on LexisNexis, I do not know the algorithm that produces the results but “trust” the system. Just interesting to think about how “competent” we must be in understanding the technology. Thanks for the article!
