Grok AI generates sexualized images of minors

While Musk presented ‘Grok’ as a no-holds-barred alternative to other AI models, the chatbot is beginning the year under mounting scrutiny following revelations that it was used to generate non-consensual sexualized images, including material involving minors.

Reports emerging in late December detailed how users exploited Grok’s image editing and generation features to create explicit depictions of women and children, some of which meet legal definitions of child sexual abuse material in multiple jurisdictions. This has, as expected, triggered outrage on the platform and renewed concerns over the adequacy of safety systems embedded in generative AI tools that operate directly inside social networks.

The incident intensified after Grok itself issued a public apology on January 1, acknowledging that an image depicting two girls, estimated to be between 12 and 16 years old, in sexualized attire had been generated and shared. “We’ve identified lapses in safeguards and are urgently fixing them,” Grok, xAI’s chatbot on X, said in the post.

For its part, xAI had promoted Grok as a less constrained alternative to rival models, emphasizing reduced censorship and maximal truth-seeking. The incident, however, has renewed criticism that looser guardrails can translate into predictable harm when multimodal tools are released without rigorous abuse testing.

Unlike many standalone AI tools, Grok operates natively inside X, generating text and images that appear as regular posts on the social network. Users can invoke the chatbot simply by tagging its account. xAI has previously marketed Grok as less restrictive than competing chatbots, a positioning reinforced by features such as “Spicy Mode,” which allows partial adult nudity and sexually suggestive content. Following the controversy, X restricted access to some of Grok’s media features, making it harder to browse or document generated images. The company has not clarified whether this change is temporary or whether additional technical controls are being deployed behind the scenes.

Grok’s loose approach to safeguards has already drawn regulatory scrutiny globally. India’s IT ministry told X’s India unit in a letter that the platform had failed to prevent misuse of Grok to generate and circulate obscene and sexually explicit content of women. In France, government ministers reported Grok’s content to prosecutors, saying in a statement on Friday that the “sexual and sexist” content was “manifestly illegal.” They said they had also reported the content to French media regulator Arcom to assess whether it complied with the European Union’s Digital Services Act.

The Grok episode does not exist in isolation; it emerges against a backdrop of sharp growth in AI-generated child sexual abuse imagery. Image generators such as Stable Diffusion and Midjourney have faced sustained criticism after users created deepfake pornography and explicit non-consensual imagery, often targeting private individuals with little practical recourse.

The growing realism of AI-generated imagery has blurred legal distinctions that once separated synthetic content from real-world abuse. In the US, courts and prosecutors increasingly treat AI-generated sexual imagery of minors as illegal regardless of whether an actual child was involved, based on standards that focus on visual indistinguishability and potential harm.

The UK and EU have adopted similar approaches, emphasizing prevention obligations for platforms, while India’s regulatory framework allows authorities to revoke intermediary protections if unlawful content is not addressed swiftly and decisively.
