In brief

  • The EU has launched a probe into whether X properly assessed risks before deploying Grok’s AI features.
  • The investigation will examine X’s compliance with the region’s Digital Services Act obligations.
  • It marks Europe’s latest crackdown on AI-generated deepfakes, with multiple countries banning Grok over child safety concerns.

The European Commission launched a formal investigation Monday into whether X violated EU digital rules by allegedly failing to prevent its Grok AI chatbot from generating and spreading illegal content, including sexually explicit images of children.

The probe will assess whether the company properly evaluated and mitigated risks before deploying Grok’s image generation features, the Commission said in a Monday statement.

The Commission also said risks have materialized through the actual generation and spread of illegal sexual content, exposing EU citizens to serious harm.

It comes amid mounting international scrutiny of Grok’s role in creating non-consensual deepfakes.

Two weeks ago, X implemented restrictions, limiting image generation to paid subscribers, and added technical barriers to prevent users from digitally altering images of real people to depict them in revealing clothing. The company also geoblocked the feature in jurisdictions where such content is illegal.

Despite these measures, researchers found that about one-third of the sexualized images of children identified in a sample collected by the Center for Countering Digital Hate (CCDH) remained accessible on X’s platform.

“With this investigation, we will determine whether X has met its legal obligations under the DSA, or whether it treated rights of European citizens—including those of women and children—as collateral damage of its service,” Henna Virkkunen, Executive Vice-President for Tech Sovereignty, Security and Democracy, said in the statement.

Earlier this month, EU Commission spokesperson Thomas Regnier condemned X’s “Spicy Mode” feature at a Brussels press conference.

“This is not spicy. This is illegal. This is appalling. This is disgusting. This has no place in Europe,” Regnier said.

Fraser Edwards, co-founder and CEO of cheqd, told Decrypt that “every creator should be able to control how their likeness is used in AI-generated media.” 

He said the “backlash around deepfake abuse underscores a basic failure of the internet itself.” 

“There is still no native way to verify who created a piece of synthetic content or whether its use was ever authorised,” Edwards added, leaving liability to continue “defaulting to intermediaries like X rather than the people responsible for generating the abuse.”

If proven, the failures under investigation would constitute infringements of Articles 34(1) and (2), 35(1), and 42(2) of the Digital Services Act, which require platforms to assess and mitigate systemic risks, including illegal content dissemination and negative effects related to gender-based violence.

The EU investigation extends a late-2023 DSA case that resulted in a $140 million (€120 million) fine against X in December for deceptive design, ad transparency failures, and limited researcher access.

The Commission has since expanded scrutiny to Grok, including prior concerns over antisemitic content generated by the chatbot.

Decrypt has reached out to xAI for further comment.

Author: Vismaya V
