A lawyer used ChatGPT to defend a client, but “didn’t know” that AI could lie
- May 29, 2023
AI has no mind or consciousness of its own, so it cannot truly understand the meaning of a text or spot its errors. The algorithm works by predicting the probability that a particular word will appear next in a sentence. If it is trained on flawed data, nothing reliable will come out of it.
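A minimal sketch of that idea is below. It is not how ChatGPT itself is implemented (ChatGPT uses a large neural network, not word counts), but the toy bigram model illustrates the same principle: the system only ranks which continuation is statistically most likely, with no notion of whether the resulting statement is true.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for training data (purely illustrative).
corpus = (
    "the court ruled in favor of the airline . "
    "the court ruled in favor of the passenger . "
    "the court dismissed the case ."
).split()

# Count how often each word follows each preceding word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probabilities(word):
    """Return the probability of each word that can follow `word`."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probabilities("court"))
# e.g. {'ruled': 0.67, 'dismissed': 0.33} -- the model knows which
# continuation is more frequent, not which statement is correct.
```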
Steven Schwartz, an attorney at Levidow, Levidow & Oberman, represented Roberto Mata, a passenger who was injured on a flight operated by the Latin American airline Avianca when a metal serving cart struck his knee.
Schwartz submitted a ten-page brief citing what appeared to be rulings in about ten similar cases. A problem arose when those citations were checked: neither the judge nor Avianca’s lawyers could find the decisions mentioned in the brief in any court database.
The lawyer later admitted under oath that he had used OpenAI’s chatbot to prepare the filing. To verify the accuracy of its answers, he simply asked ChatGPT whether it was lying. The neural network assured him that all the cases it had provided were real.
Screenshot of dialog with ChatGPT / Photo from case file
According to Schwartz, he “didn’t know” that the chatbot could lie. He now “deeply regrets using generative AI to support legal research.”
The lawyer who presented false facts in court now risks losing his license.
Source: 24 Tv