EU Investigates X Over Sexually Explicit Deepfake Images Generated by Grok Chatbot - Trance Living


The European Commission has opened a formal investigation into Elon Musk’s social media platform X after reports that the company’s artificial-intelligence chatbot, Grok, enabled users to create and distribute sexually explicit images, including content that may qualify as child sexual abuse material. The probe, announced on Monday, is the latest action taken under the European Union’s Digital Services Act (DSA), a regulatory framework that sets extensive obligations for large online platforms operating in the 27-nation bloc.

According to a Commission statement, officials will examine whether X properly assessed and mitigated systemic risks before rolling out Grok’s image-generation features to users in the EU. Regulators intend to focus on potential violations related to “manipulated sexually explicit images” and other illegal content that might circulate through the service. The DSA requires platforms designated as Very Large Online Platforms (VLOPs) to implement safeguards that prevent the rapid spread of harmful or unlawful material and to demonstrate effective crisis-response mechanisms when violations occur.

Scope of the Inquiry

Investigators plan to review internal risk assessments, content-moderation procedures, and any technological controls X has deployed since media outlets and watchdog groups flagged Grok-generated images depicting minors and adults in sexualized scenarios. The Commission said those risks “seem to have materialised, exposing citizens in the EU to serious harm.” Under the DSA, a company found to be out of compliance can face fines of up to 6% of its global annual turnover or, in extreme cases, service suspension within the EU.

The launch of Grok, positioned by the company as a conversational AI assistant similar to OpenAI’s ChatGPT, prompted controversy earlier this year when users demonstrated that straightforward prompts could yield deepfake images of real individuals, including children, in sexual contexts. Critics argued that the feature presented an immediate risk of distributing child sexual abuse material (CSAM), a criminal offense under both EU and U.S. law.

Amid growing pressure, X stated in early May that it had “implemented technological measures” preventing Grok users on the social network from editing images of identifiable persons to depict them in revealing clothing such as bikinis. The firm also restricted the creation of edited images through Grok to subscribers of its paid service tier. However, the standalone Grok mobile application, which does not automatically post generated content to the public timeline, continued to allow non-paying users to produce imagery featuring women and children in sexualized poses. The Commission’s inquiry will attempt to determine whether those steps meet the DSA’s requirements for effective and proportionate risk reduction.

International Scrutiny

Regulators outside the EU have launched parallel examinations. Authorities in the United Kingdom, India, and Malaysia each confirmed separate reviews into whether Grok’s image-generation tools breach local laws regarding explicit or exploitative content. The coordinated attention highlights the increasing concern among governments about the rapid deployment of generative AI products before adequate guardrails are in place.

During a panel discussion reported by CNBC, U.S. Under Secretary of State for Public Diplomacy Sarah Rogers commented on the broader deepfake challenge, saying, “Deepfakes are a troubling, frontier issue that call for tailored, thoughtful responses. Erecting a ‘Great Firewall’ to ban X, or lobotomizing AI, is neither tailored nor thoughtful. We stand ready to work with the EU on better ideas.” Her remarks underscore a trans-Atlantic debate about how to balance innovation with the need to prevent technology-enabled abuse.

History of EU Enforcement Actions Against X

The investigation into Grok is not the first time X has faced regulatory pressure under the Digital Services Act. In December 2023, the Commission imposed a €120 million ($142.3 million) fine on the company for failing to meet transparency obligations concerning its recommendation algorithms and content-moderation policies. Officials are now extending that earlier case to examine how Grok’s integration may affect the platform’s recommendation system and user safety.

The DSA assigns special responsibilities to VLOPs with more than 45 million monthly active users in the EU. These companies must conduct annual risk assessments covering illegal content, disinformation, and impacts on fundamental rights. They are also required to open their systems to independent audits and provide data access to vetted researchers. A detailed overview of the DSA’s legal text is available through the European Union’s official database, EUR-Lex.



Potential Penalties and Next Steps

Should the Commission conclude that X failed to implement adequate safeguards, the platform could face significant monetary penalties or mandatory changes to its technology. The DSA permits officials to issue periodic penalty payments to compel compliance and, in cases of repeated or severe violations, to request courts to block access to the service within the EU. The investigation will proceed with a formal information-gathering phase, during which X must submit documentation detailing its AI policies, content filters, and response timelines for reported abuse.

A spokesperson for X said the company “remains committed to cooperating with regulators” and emphasized recent updates intended to “ensure user safety.” The firm did not provide additional details regarding its internal review process or whether further limits on Grok’s capabilities are under consideration.

Industry analysts note that AI-generated imagery presents unique challenges for existing content-moderation frameworks. Unlike traditional user uploads, synthetic media can be created rapidly at scale, making detection and removal more complex. Organizations focused on child protection have argued for specialized hashing databases and real-time scanning tools to catch newly generated CSAM before it is shared publicly.
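The hash-database approach described above generally works by computing a compact perceptual fingerprint of an image and comparing it against a blocklist of fingerprints of known abusive material, tolerating small edits such as resizing or recompression. The following is a minimal, illustrative sketch of that general technique using a simple average hash over an 8×8 grayscale grid; all function names are hypothetical, and production systems (e.g., PhotoDNA-style tools) use far more robust hashes and secure, vetted databases.

```python
# Illustrative sketch of perceptual-hash blocklist matching.
# Not a real CSAM-detection system; hash design and thresholds here
# are assumptions chosen for clarity.

def average_hash(pixels):
    """Compute a 64-bit average hash from an 8x8 grayscale grid.

    `pixels` is an 8x8 list of lists of 0-255 luminance values.
    Each bit is 1 if the corresponding pixel is brighter than the mean,
    so the hash captures coarse structure rather than exact bytes.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Number of bits that differ between two hashes."""
    return bin(h1 ^ h2).count("1")

def matches_blocklist(candidate_hash, blocklist, max_distance=5):
    """Flag an image whose hash lies within `max_distance` bits of any
    known-bad hash; near-duplicates survive small edits like cropping
    or recompression, which is the point of perceptual hashing."""
    return any(hamming_distance(candidate_hash, bad) <= max_distance
               for bad in blocklist)
```

A platform would compute the hash at upload (or, for generative tools, at creation) and block or escalate any match before the content reaches the public timeline. The open design question the article alludes to is that such databases only catch *known* material, whereas newly generated synthetic images require classifier-based detection as well.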

The Commission did not specify a deadline for concluding the inquiry but indicated that interim measures could be applied if investigators identify ongoing risks to EU citizens. In the coming months, officials are expected to review X’s risk-assessment reports, examine technical logs from Grok’s image generator, and consult with external experts on child safety, digital rights, and artificial intelligence.

The outcome of the case may set a precedent for how European regulators handle generative AI features on mainstream social platforms. It could also influence forthcoming legislation aimed at harmonizing rules for AI across the bloc, complementing the DSA’s broader content and transparency mandates.

Image credit: Dado Ruvic/Reuters
