ChatGPT, an OpenAI-trained artificial intelligence chatbot, falsely accused prominent criminal defense attorney and law professor Jonathan Turley of sexual harassment.
The chatbot made up a Washington Post article about a law school trip to Alaska in which Turley was accused of making sexually provocative statements and attempting to touch a student, even though Turley had never been on such a trip.
Turley’s reputation took a major hit after the damaging claims quickly went viral on social media.
“It was a surprise to me since I have never gone to Alaska with students, The Post never published such an article, and I have never been accused of sexual harassment or assault by anyone,” he said.
Turley learned of the accusations after receiving an email from a fellow law professor who had used ChatGPT to research instances of sexual harassment by academics at American law schools.
Professor Jonathan Turley was falsely accused of sexual harassment by AI-powered ChatGPT. Image: Getty Images
The Need For Caution When Using AI-Generated Data
On his blog, the George Washington University professor said:
“Yesterday, President Joe Biden declared that ‘it remains to be seen’ whether Artificial Intelligence is ‘dangerous’. I would beg to differ…”
Turley’s experience has raised concerns about the reliability of ChatGPT and the likelihood of similar incidents in the future. Microsoft, which backs the chatbot, said it has implemented upgrades to improve accuracy.
Is ChatGPT Hallucinating?
When AI produces results that are unexpected, incorrect, and not supported by real-world evidence, it is said to be “hallucinating.”

These hallucinations can generate false content, news, or information about individuals, events, or facts. Cases like Turley’s show how damaging such fabrications can be.
Author: Christian Encila