The launch of Grok, positioned by the company as a conversational AI assistant similar to OpenAI’s ChatGPT, prompted controversy earlier this year when users demonstrated that straightforward prompts could yield deepfake images of real individuals, including children, in sexual contexts. Critics argued that the feature presented an immediate risk of distributing child sexual abuse material (CSAM), a criminal offense under both EU and U.S. law.
Amid growing pressure, X stated in early May that it had “implemented technological measures” preventing Grok users on the social network from editing images to depict identifiable persons in revealing clothing such as bikinis. The firm also restricted the creation of edited images through Grok to subscribers of its paid service tier. However, the standalone Grok mobile application, which does not automatically post generated content to the public timeline, continued to allow non-paying users to produce imagery featuring women and children in sexualized poses. The Commission’s inquiry will assess whether those steps meet the DSA’s requirements for effective and proportionate risk reduction.
International Scrutiny
Regulators outside the EU have launched parallel examinations. Authorities in the United Kingdom, India, and Malaysia each confirmed separate reviews into whether Grok’s image generation tools breach local laws regarding explicit or exploitative content. The coordinated attention highlights the increasing concern among governments about the rapid deployment of generative AI products before adequate guardrails are in place.
During a panel discussion reported by CNBC, U.S. Under Secretary of State for Public Diplomacy Sarah Rogers commented on the broader deepfake challenge, saying, “Deepfakes are a troubling, frontier issue that call for tailored, thoughtful responses. Erecting a ‘Great Firewall’ to ban X, or lobotomizing AI, is neither tailored nor thoughtful. We stand ready to work with the EU on better ideas.” Her remarks underscore a trans-Atlantic debate about how to balance innovation with the need to prevent technology-enabled abuse.
History of EU Enforcement Actions Against X
The investigation into Grok is not the first time X has faced regulatory pressure under the Digital Services Act. In December 2023, the Commission imposed a €120 million ($142.3 million) fine on the company for failing to meet transparency obligations concerning its recommendation algorithms and content-moderation policies. Officials are now building on that earlier case to examine how Grok’s integration may affect the platform’s recommendation system and user safety.
The DSA assigns special responsibilities to very large online platforms (VLOPs), defined as services with more than 45 million monthly active users in the EU. These companies must conduct annual risk assessments covering illegal content, disinformation, and impacts on fundamental rights. They are also required to open their systems to independent audits and provide data access to vetted researchers. A detailed overview of the DSA’s legal text is available through the European Union’s official database, EUR-Lex.
Potential Penalties and Next Steps
Should the Commission conclude that X failed to implement adequate safeguards, the platform could face fines of up to 6 percent of its global annual turnover or mandatory changes to its technology. The DSA also permits officials to impose periodic penalty payments to compel compliance and, in cases of repeated or severe violations, to ask courts to block access to the service within the EU. The investigation will proceed with a formal information-gathering phase, during which X must submit documentation detailing its AI policies, content filters, and response timelines for reported abuse.
A spokesperson for X said the company “remains committed to cooperating with regulators” and emphasized recent updates intended to “ensure user safety.” The firm did not provide additional details regarding its internal review process or whether further limits on Grok’s capabilities are under consideration.
Industry analysts note that AI-generated imagery presents unique challenges for existing content-moderation frameworks. Unlike traditional user uploads, synthetic media can be created rapidly at scale, making detection and removal more complex. Organizations focused on child protection have argued for specialized hashing databases and real-time scanning tools to catch newly generated CSAM before it is shared publicly.
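To illustrate the hash-matching approach such organizations describe, the sketch below computes a simple 64-bit “average hash” for an image and compares it against a set of known hashes. The hash database, distance threshold, and function names here are illustrative assumptions rather than any platform’s or vendor’s actual tooling; production systems rely on more robust, often proprietary, perceptual-hashing services.

```python
# Minimal sketch of perceptual-hash matching against a database of known
# image hashes. The 64-bit "average hash" is a simplified, widely described
# technique; the hash set, threshold, and names below are hypothetical.
from PIL import Image


def average_hash(path: str, size: int = 8) -> int:
    """Shrink to size x size, grayscale, then threshold each pixel at the mean."""
    img = Image.open(path).convert("L").resize((size, size), Image.LANCZOS)
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel >= mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Count the bits that differ between two hashes."""
    return bin(a ^ b).count("1")


def matches_known_hashes(candidate: int, known_hashes: set[int], max_distance: int = 5) -> bool:
    """Flag an image if it falls within a small Hamming distance of any known hash."""
    return any(hamming_distance(candidate, known) <= max_distance for known in known_hashes)


# Hypothetical usage: known_hashes would be populated from a vetted hash database.
# known_hashes = load_known_hashes()                      # placeholder
# if matches_known_hashes(average_hash("upload.png"), known_hashes):
#     ...  # route to a moderation/reporting workflow
```

Matching against known hashes only catches previously identified material, which is why the groups cited above also push for real-time scanning of newly generated images rather than hash lookups alone.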
The Commission did not specify a deadline for concluding the inquiry but indicated that interim measures could be applied if investigators identify ongoing risks to EU citizens. In the coming months, officials are expected to review X’s risk-assessment reports, examine technical logs from Grok’s image generator, and consult with external experts on child safety, digital rights, and artificial intelligence.
The outcome of the case may set a precedent for how European regulators handle generative AI features on mainstream social platforms. It could also influence forthcoming legislation aimed at harmonizing rules for AI across the bloc, complementing the DSA’s broader content and transparency mandates.
Image credit: Dado Ruvic/Reuters