Don’t trust ChatGPT too much, especially at work
- May 28, 2023
Those who read me regularly know that, in general, I have a pretty positive opinion of ChatGPT, the OpenAI chatbot that kicked off the lightning-fast integration of generative artificial intelligence models into all kinds of services. If it weren’t for its launch late last year, the new Bing, Google Bard, the various “copilots” and other such features would surely still be in their infancy.
Having a positive opinion of the service does not, however, mean being unaware of its problems and limitations. In fact, when I first wrote about ChatGPT in MuyComputer, it was precisely to focus on those issues, as you can see here. In that piece I mentioned three types of causes of errors in the chatbot’s responses and, as I indicated at the time, I saved the most troubling one for last.
I’m talking, of course, about hallucinations, a problem that is difficult to solve completely and that, to this day, forces us to check the answers provided by ChatGPT against an external, reliable source to confirm that we are not dealing with an algorithm that has decided to let its imagination run wild. Otherwise, if we trust any answer blindly, especially in a work context, we can expose ourselves to problems and very embarrassing situations.
Just such a circumstance befell Steven A. Schwartz, the New York lawyer who inadvertently used fake data created by ChatGPT in a lawsuit. Specifically, it involved a case in which he represented a private plaintiff against Avianca over an incident on a flight between El Salvador and New York, during which Schwartz’s client was accidentally struck in the leg by one of the trolleys flight attendants use to serve food and drinks, sell on-board items and so on.
In his filing, the lawyer cited several court precedents for similar cases, but as you can imagine, their source was none other than ChatGPT. And yes, you guessed it: the prior cases cited by the chatbot were either false or incorrect. So now it is the lawyer who has to explain to the court why his lawsuit contained false statements, as we read in Reason, which also includes some of the communications between the two parties.
Schwartz’s actions do not appear to have been intentional, since even the slightest effort to verify the cited rulings would have revealed that they were fabricated. So I think his statement that he relied too heavily on ChatGPT is honest, and I’m sure he has learned his lesson and it won’t happen again. The episode also serves a very useful purpose: it is a great reminder that a chatbot cannot be your only source… unless you don’t mind getting into trouble.
Source: Muy Computer