Saturday, July 8, 2023

COMPLIANCE OFFICERS, STICK TO AI-POWERED AML PLATFORMS; AVOID CHATGPT AND ITS PROGENY DUE TO FABRICATED AND INCORRECT INFORMATION

There's even more nasty civil litigation being filed against users of ChatGPT, who are alleging that they received totally fabricated, libelous, and even malicious information when they employed the generative AI chatbot. Plaintiffs allege that information "bearing no resemblance to the truth" was obtained and acted upon, with dangerous results.

The most disturbing feedback is that the product has been accused of returning misinformation and disinformation verging on "hallucinations," resulting in major defamation litigation. These are not isolated events, so compliance officers would be well advised to ignore the buzz over any similar AI product in both Customer Identification Programs and Transaction Monitoring when seeking information and data to supplement inquiries through their legacy systems.

It is prudent to limit your use of AI to those AML/CFT platforms that employ AI within a system designed for your use, and not to stray outside such systems. It is apparent that ChatGPT, and similar products, when asked for information, can create or manufacture an answer, leaving the world of facts behind in their zeal to please the user with a response. Do you really want to find out, too late to correct your mistake, that you relied upon fabricated data, resulting in litigation against you and your client? Govern yourselves accordingly.
