Thursday, February 13, 2025

YET ANOTHER COURT DECISION SLAMMING AI AS A LEGAL RESEARCH TOOL SHOULD CONVINCE YOU NOT TO USE IT FOR DUE DILIGENCE WITHOUT SERIOUS CONFIRMATION OF ALL ITS RESULTS

Yes, it happened again: a federal judge in the US District Court for the District of Wyoming entered an order captioned ORDER TO SHOW CAUSE WHY PLAINTIFFS' ATTORNEYS SHOULD NOT BE SANCTIONED OR OTHER DISCIPLINARY ACTION SHOULD NOT ISSUE, in Case No. 2:23-CV-118-KHR. Once more, lawyers who should have known better, having seen previous instances in which ChatGPT fabricated published court decisions, went ahead and used the AI program anyway, then foolishly cited the nonexistent law it produced to a judge. After checking the research and finding that it was completely fictional, the District Judge issued an Order to Show Cause why the lawyers should not face punitive sanctions. Page 2 of the order appears below. Don't those lawyers read the news?


I cannot stress this enough to compliance officers involved in due diligence research: I never rely upon AI platforms to tell me whether there is information on my target, for the programs are still returning fictional results when they cannot find actual, real data. All these nightmare scenarios involving lawyers should tell you that AI is simply not bulletproof yet, and until it is, do not rely upon it. If you approve a new bank client, and it later turns out that the positive information you relied upon was a hallucination of the AI program you used, and the client was unsuitable, resulting in damage to the bank or its clients, you will be held accountable, which could be costly to both your professional reputation and your pocketbook. Govern yourselves accordingly.

[Page 2 of the Order to Show Cause, reproduced as an image in the original post]