In brief
- X restricted Grok's image generation and editing features, limiting access to paid subscribers.
- The changes followed reports of non-consensual sexualized AI images, including those involving minors.
- Regulators in California, Europe, and Australia are investigating xAI and Grok over potential violations.
X said it is restricting image generation and editing features tied to Grok, limiting access to paid users after the chatbot was used to create non-consensual sexualized images of real people, including minors.
In an update posted by the X Safety account on Wednesday, the company added technical restrictions to limit how users can edit images of real people through Grok.
The move followed reports that the AI generated sexualized pictures in response to simple prompts, including requests to place people in bikinis. In many cases, users tagged Grok directly under photos posted on X, causing the AI to generate edited images that appeared publicly in the same threads.
“We have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing such as bikinis,” the company said, referencing the viral trend of asking Grok to put people in bikinis.
The company also said image creation and editing through the Grok account on X are now available only to paid subscribers, a change it said is intended to improve accountability and prevent uses of Grok's image tools that violate the law or X's policies. The company also instituted location-based restrictions.
“We now geoblock the ability of all users to generate images of real people in bikinis, underwear, and similar attire via the Grok account and in Grok in X in those jurisdictions where it’s illegal.”
Despite the changes, however, Grok continues to allow users to remove or alter clothing from photos uploaded directly to the AI, according to Decrypt’s testing and user reports following the announcement.
In some cases, Grok acknowledged “lapses in safeguards” after generating images of girls aged 12 to 16 in minimal clothing, conduct prohibited under the company’s own policies. The continued availability of those capabilities has drawn scrutiny from advocacy groups.
“If reports that Grok created sexualized images—particularly of children—are true, Texas law may have been broken,” Adrian Shelley, Texas director of Public Citizen, said in a statement. “Texas authorities do not have to look far to investigate these allegations. X is headquartered in the Austin area, and the state has a clear responsibility to determine whether its laws were broken and, if so, what penalties are warranted.”
Public Citizen previously called for the U.S. government to pull Grok from its list of acceptable AI models over concerns of racism exhibited by the chatbot.
Global backlash
Global policymakers have also increased scrutiny of Grok, leading to several open investigations.
The European Commission said X and xAI could face enforcement under the Digital Services Act if safeguards on Grok remained inadequate. At the same time, Australia’s eSafety Commissioner said complaints involving Grok and non-consensual AI-generated sexual images have doubled since late 2025. The regulator said AI image tools capable of producing realistic edits complicate enforcement and victim protection.
In the UK, Ofcom opened an investigation into X under the Online Safety Act after Grok was used to generate illegal sexualized deepfake images, including those involving minors. Officials said Ofcom could ultimately seek court-backed measures that effectively block the service in the UK if X is found non-compliant and fails to take corrective action.
Other countries, including Malaysia, Indonesia, and South Korea, have also opened investigations into Grok in a bid to protect minors.
While states across the U.S. monitor the situation, California is the first to open an investigation into Grok. On Wednesday, California Attorney General Rob Bonta announced a probe into xAI and Grok over the creation and spread of non-consensual sexually explicit images of women and children.
“The avalanche of reports detailing the non-consensual, sexually explicit material that xAI has produced and posted online in recent weeks is shocking. This material, which depicts women and children in nude and sexually explicit situations, has been used to harass people across the internet,” Bonta said in a statement.
The investigation will examine whether xAI’s deployment of Grok violated state laws governing non-consensual intimate imagery and child sexual exploitation.
“I urge xAI to take immediate action to ensure this goes no further,” Bonta said. “We have zero tolerance for the AI-based creation and dissemination of nonconsensual intimate images or of child sexual abuse material.”
Despite the ongoing investigations, X said it takes a "zero tolerance" stance on child sexual exploitation, non-consensual nudity, and unwanted sexual content.
“We take action to remove high-priority violative content, including Child Sexual Abuse Material (CSAM) and non-consensual nudity, taking appropriate action against accounts that violate our X Rules,” the company wrote. “We also report accounts seeking Child Sexual Exploitation materials to law enforcement authorities as necessary.”
Author: Jason Nelson
