If Joe Biden wants a smart, folksy AI chatbot to answer questions for him, his campaign team won’t be able to use Claude, Anthropic’s ChatGPT competitor, the company announced today.
“We don’t allow candidates to use Claude to build chatbots that can pretend to be them, and we don’t allow anyone to use Claude for targeted political campaigns,” the company said. Violations of this policy will be met with warnings and, ultimately, suspension of access to Anthropic’s services.
Anthropic’s public articulation of its “election misuse” policy comes as AI’s potential to mass-generate false and misleading information, images, and videos is triggering alarm bells worldwide.
Meta implemented rules restricting the use of its AI tools in politics last fall, and OpenAI has similar policies.
Anthropic said its political protections fall into three main categories: developing and enforcing policies related to election issues, evaluating and testing models against potential misuses, and directing users to accurate voting information.
Author: Jose Antonio Lanz