Key Points
- UK government condemns non-consensual AI-generated images created with Grok and demands rapid action from X's owner, Elon Musk.
- Ofcom and government officials are assessing X's compliance with the Online Safety Act as institutions withdraw from, or limit their use of, the platform.
- Calls grow for tougher AI safeguards and enforcement after victims and campaigners highlight distress caused by image misuse.
British officials have publicly demanded that Elon Musk’s social media platform X urgently fix serious problems with its built-in AI chatbot Grok after a wave of disturbing, non-consensual deepfake images spread online. The UK’s technology minister called the content “absolutely appalling” and said X must act swiftly to protect users from harmful AI misuse. Grok has reportedly been used to generate sexually explicit and degrading depictions of women and girls, including digitally undressed images created without consent, prompting widespread outrage and renewed calls for safety measures. Authorities, campaigners and victims say these AI-generated images inflict real psychological harm and violate personal dignity, making immediate action a priority.
The UK's communications regulator, Ofcom, has contacted X and Grok's developer, xAI, to assess whether the platform is complying with its duties under the UK's Online Safety Act, which requires tech companies to prevent and remove illegal content and safeguard users. The government emphasised that laws against intimate image abuse and deepfake exploitation already exist and that there is no tolerance for technology that enables widespread misuse. Critics argue that Grok's current safeguards are insufficient compared with other AI systems that block or refuse harmful image generation. They have also called for stronger enforcement powers and clearer regulatory standards around AI tools that can manipulate people's likenesses.
The controversy has also triggered institutional pushback beyond the government's statement. The UK Parliament's Women and Equalities Committee stopped using X following the surge in AI-altered images, citing concerns that continuing to post on the platform would conflict with the committee's work opposing violence against women and girls. Some individual MPs have left the platform entirely, arguing that X has failed to address the proliferation of offensive and misogynistic behaviour and illegal content. The committee's departure marks one of the most significant official withdrawals from X by a UK body amid the backlash.
Campaigners and expert groups have urged the government to consider further legal reforms to address what they call “nudification” and other forms of AI-enabled abuse, arguing that existing laws need stronger teeth to keep pace with rapidly evolving technology. Some advocates have said that disabling Grok’s image-editing features until robust protections are in place should be a priority. They highlight that other AI platforms already implement technical limits on generating extreme or explicit content, demonstrating that safeguards are feasible.
The government’s intervention comes amid broader international scrutiny of AI misuse on social platforms. Regulators and officials in France, India, Malaysia and other countries have also raised concerns about Grok’s output and called for investigations or urgent action. The European Commission, for example, has ordered X to retain internal documents related to Grok for regulatory review, underscoring heightened oversight of how AI systems are deployed and monitored.
Victims of the deepfake images have spoken out about the personal impact, saying that seeing manipulated images of themselves online is humiliating and distressing. Some have noted that identical prompts submitted to other AI platforms were refused, raising questions about Grok's product design choices and the adequacy of its content filtering. These accounts have amplified calls for accountability from both tech companies and lawmakers.
Tech industry observers warn that AI tools without strong safety controls risk undermining public trust in generative technology more broadly, potentially slowing innovation and adoption. Ensuring meaningful protection against abusive uses of AI is seen as essential to balancing technological advancement with individual rights. As pressure mounts, all eyes are on how Musk, X and regulators respond to demands for tighter controls and greater transparency around AI abuse risks.