Key Points
- Japan has formally launched a probe into Grok AI over its capacity to generate inappropriate and sexualised images, demanding immediate improvements.
- Minister Kimi Onoda said authorities could pursue legal action if X Corp fails to address concerns or improve safeguards.
- The investigation adds to global scrutiny of Grok, with other countries taking regulatory or access-blocking measures over harmful AI content.
Japan’s government said it has launched a formal investigation into Elon Musk’s AI service Grok amid mounting concerns over the chatbot’s ability to generate inappropriate and sexualised images of real people. The probe, conducted by the Japanese Cabinet Office, follows similar actions by regulators in countries such as the UK and Canada, highlighting a growing global backlash against the AI tool’s content controls.
Economic Security Minister Kimi Onoda, who also oversees Japan’s AI strategy, said officials have asked X Corp — the social media company hosting Grok — to implement immediate technical safeguards to curb problematic outputs. She noted, however, that no response has yet been received, and that Japan will consider all legal options, including possible enforcement measures, if improvements are not made promptly.
Japan’s request came as xAI, the company behind Grok, said it had already introduced restrictions aimed at blocking users from editing or generating images of people in revealing clothing where such content is illegal. These changes are part of broader efforts to address criticism that the AI chatbot was too permissive in producing controversial content.
The Japanese action reflects heightened scrutiny of generative AI tools, especially those integrated into popular platforms with vast user bases. Authorities in multiple jurisdictions are probing whether explicit or non-consensual image content generated by these systems violates privacy, safety or cybercrime laws and are pushing tech firms to adopt stronger preventative measures.
This investigation adds to a wave of international concern over Grok’s content governance. Some governments — including Malaysia and Indonesia — have already taken stronger steps, such as temporarily blocking access to Grok, citing the risk of obscene or harmful images spreading online. Meanwhile, regulators in the European Union are weighing enforcement actions under digital safety laws if platforms fail to tighten controls.
The controversy around Grok’s imagery has also drawn legal and advocacy pressure in the United States, where authorities such as the California Attorney General have issued cease-and-desist orders targeting non-consensual explicit content production by AI tools. Civil rights groups and lawmakers have argued that more stringent safeguards — including age restrictions and explicit content filters — are necessary to protect vulnerable users.
Observers say the public backlash and regulatory probes underscore broader ethical and legal challenges posed by powerful generative AI systems that can create realistic representations of people without consent. Proponents of stricter governance argue that clearer rules and enforcement mechanisms are needed to prevent misuse, protect privacy and curb the spread of harmful material online.