Grok, the AI chatbot developed by Elon Musk’s xAI and integrated into X (formerly Twitter), is facing worldwide condemnation after users exploited its image‑generation tools to create non‑consensual sexualised deepfakes, including images resembling minors. Governments, regulators, and digital‑safety organisations across multiple continents have now launched investigations, issued takedown orders, or even blocked Grok entirely.
This update explains what happened, why Grok is under investigation, the latest regulatory actions, and what it all means for the future of online safety and AI governance.
What Triggered the Grok Deepfake Scandal?
In late December, users discovered that Grok’s image editing tools—especially Grok Imagine and its “spicy mode”—could:
- Remove clothing from real people
- Generate sexualised or explicit imagery
- Create manipulated images of women and children
- Publicly post the results instantly via X
These cases escalated rapidly when reports emerged that some images appeared to depict minors, turning a content‑moderation failure into a potential criminal‑law issue. [abcnews.go.com], [midgard.co.uk]
Because Grok is directly built into X, anyone could tag @grok under a post and generate a manipulated image within seconds—dramatically increasing amplification, reach, and harm potential.
Global Regulatory Reactions (Updated January 2026)
🇲🇾 Malaysia & 🇮🇩 Indonesia Block Grok Completely
These countries became the first to block Grok, citing serious concerns about:
- Production of sexually explicit non‑consensual images
- Risk to children and women
- Inadequate safeguards in X’s AI tools
Authorities stated that Grok’s features created a “severe digital‑safety threat” that violated national laws. [abcnews.go.com]
🇬🇧 UK Launches Formal Investigation Under the Online Safety Act
The UK’s media watchdog Ofcom opened a formal investigation after finding evidence that Grok’s tools generated:
- Undressed deepfakes
- Sexualised images of minors
- Harmful, potentially illegal content
UK officials, including the Prime Minister, labelled Grok’s outputs as “disgusting,” “unlawful,” and “not to be tolerated.” X could face fines up to 10% of global revenue if found in breach. [aljazeera.com]
🇪🇺 European Union Condemns Grok’s Output
The EU called Grok’s behaviour:
- “Illegal”
- “Appalling”
- “Disgusting”
The European Commission is now reviewing whether Grok violates the Digital Services Act (DSA)—which has significant enforcement powers. [face2faceafrica.com]
🇮🇳 India Orders Removal of Unlawful Content
India’s IT Ministry demanded that X:
- Remove explicit deepfake content
- Review Grok’s safety mechanisms
- Comply with national legality and platform governance standards
Officials warned of legal consequences if violations persist. [face2faceafrica.com]
🇧🇷 Brazil Files Legal Complaints
Brazilian lawmakers have submitted official complaints against Grok and X, arguing that the platform's AI tools may violate:
- Privacy laws
- Child‑protection laws
- Consent‑based image rights
They are pushing to suspend Grok until investigations are complete. [face2faceafrica.com]
What X and xAI Have Done So Far
In response to mounting global pressure:
- Image generation/editing is now restricted to paying subscribers, reducing widespread misuse. [abcnews.go.com]
- X claims it removes illegal content and suspends offending accounts.
- Musk publicly warned that users generating illegal images “will face the same consequences as if they uploaded illegal material.” [face2faceafrica.com]
Despite this, authorities continue to argue that these steps are not enough.
Why This Matters: Key Cyber‑Safety & AI Governance Issues
1. Deepfakes Spread Faster Than Moderators Can React
Grok demonstrated how AI can amplify harm at a speed that overwhelms human review. [midgard.co.uk]
2. Platform‑Generated vs User‑Generated Content Is Blurring
Grok is tightly woven into X. A user prompt may start the process, but:
- The platform itself generates the image
- The platform itself publicly posts it
This raises regulatory questions that current laws did not anticipate. [midgard.co.uk]
3. Governments Are Moving Toward Stricter AI Laws
The controversy is already influencing new legislative proposals around:
- Non‑consensual AI image creation
- Online child safety
- Automated content moderation
- AI accountability obligations
Multiple countries are drafting updated legal frameworks now. [face2faceafrica.com]
Latest Developments (Mid‑January 2026)
✔ Global investigations intensify, with new inquiries opened across Europe, the UK, and Asia.
✔ Explicit deepfake generation has slowed but not stopped entirely. Third‑party apps also remain a loophole.
✔ High‑profile individuals continue to be targeted, including Sweden’s Deputy Prime Minister.
✔ Safety experts urge proactive AI restrictions, not reactive moderation.
✔ Lawmakers worldwide are proposing new deepfake‑specific criminal legislation. [nbcnews.com] [globalnews.ca] [face2faceafrica.com]
Ready to Protect Your Business?
Contact EC Computers today for a cyber security assessment and discover how we can help you stay secure.
📞 Call us: 0117 200 1000
📧 Email: Contact-us form
🌐 Visit: https://eccomputers.co.uk/cyber-security/
Further reading: Managed IT Services and Support – Keeper Password Manager
2026 All-IP Deadline and Copper Switch off
Privacy compliance and new 2025 laws
#Grok AI deepfakes #Elon Musk Grok scandal #AI sexualised images investigation #Grok Imagine spicy mode #Non‑consensual AI images #Global AI regulation 2026 #Online Safety Act Grok investigation
