January 26, 2026
EU Launches Explosive Probe Into Elon Musk’s X Over AI Sex Images

The European Union has opened a sweeping investigation into Elon Musk’s social media platform X, intensifying scrutiny of the company over its AI chatbot Grok and the alleged mass production of sexually explicit images, including content that may involve child sexual abuse material.

The inquiry, announced this week by the European Commission, is being conducted under the bloc’s Digital Services Act (DSA), a landmark law designed to rein in the risks posed by large online platforms. Regulators will assess whether X properly identified and mitigated the dangers created by Grok’s image-generation capabilities, particularly the creation of non-consensual sexual deepfakes involving women and children.

According to researchers cited by EU officials, Grok was used to generate millions of sexualised images within a short period, including tens of thousands that appeared to depict minors. The findings triggered international alarm and raised questions about how quickly — and how seriously — X responded once the scale of the problem became clear.

European regulators argue that the company’s response fell short. While X initially restricted access to Grok’s image-editing tools and later introduced safeguards to prevent the manipulation of images of real people, the Commission said those steps did not address the broader, systemic risks posed by the technology. Officials remain concerned that Grok could continue to enable the creation and spread of illegal material at scale.

Henna Virkkunen, the Commission’s executive vice-president responsible for tech sovereignty, security and democracy, described non-consensual sexual deepfakes as a form of violence. She said the investigation would determine whether X upheld its legal obligations or prioritised product rollout over the fundamental rights of European users.

The probe does not stop at explicit images. Regulators are also expanding a separate investigation into X’s recommender systems — the algorithms that decide which content users see — especially in light of the company’s plan to integrate Grok more deeply into content moderation and information filtering. The Commission is examining whether these systems amplify harmful or illegal content rather than limiting its reach.

The move comes amid growing frustration in Brussels over what lawmakers see as X’s repeated failures to comply with EU rules. Just weeks ago, the platform was fined €120 million for breaching the DSA by misleading users with so-called blue tick verification, obstructing independent researchers, and failing to meet transparency requirements around advertising. Musk responded publicly by dismissing the fine and attacking EU institutions.

Problems Beyond Europe

The EU is not alone in its concerns. Since Musk acquired X, the platform has faced regulatory, legal and political pressure across multiple regions.

In the United Kingdom, media regulator Ofcom has launched its own investigation into illegal and harmful content on X under the country’s Online Safety framework. UK authorities have raised alarms about the platform’s handling of abusive material, disinformation and content harmful to children.

In the United States, X has avoided a single overarching regulator like the EU’s Commission, but it has faced lawsuits, advertiser backlash and scrutiny from state-level authorities. Civil society groups have repeatedly accused the platform of weakening safeguards against harassment, hate speech and child exploitation since Musk cut trust and safety teams. Several major advertisers paused or ended campaigns, citing concerns over brand safety.

Brazil has emerged as another flashpoint. Authorities there have clashed with Musk over content moderation, misinformation and compliance with court orders. At times, Brazilian officials have threatened fines or restrictions if X failed to remove accounts or content deemed illegal under local law, highlighting tensions between Musk’s “free speech” stance and national regulations.

In Australia, regulators have pursued X over its refusal to remove violent or harmful material, including content linked to extremist attacks. The country’s eSafety Commissioner has argued that X’s approach undermines efforts to protect users, particularly children, from graphic and traumatic material.

India has also seen disputes over content takedowns, with the government ordering removals related to political speech and public order. X has challenged some directives while complying with others, placing it in an ongoing tug-of-war between local law and its global moderation policies.

A Test Case for AI Governance

The EU’s Grok investigation is shaping up to be a defining test for how governments regulate generative AI embedded inside social platforms. Unlike traditional moderation failures, AI systems can produce harmful content instantly and at massive scale, making enforcement both more urgent and more complex.

Critics say the Commission acted too slowly, but supporters argue the case could set a precedent for how AI-driven features are assessed under existing digital laws. If X is found in breach of the DSA, it could face further fines or legally binding orders to overhaul how Grok operates in Europe.

For Musk’s X, the investigation adds to a growing list of global battles — and signals that regulators are no longer willing to treat AI mishaps as experimental growing pains. Instead, they are framing them as serious risks with real-world consequences, especially for women and children.
