UK tells Ofcom to act on xAI Grok image abuse

Here’s the plain update we’d want you to have if you work with young people online: the UK government has told Ofcom to move quickly after reports that xAI’s Grok still allows paying users to generate sexualised images of women and children. Technology Secretary Liz Kendall said on 9 January 2026 that the regulator should use its full powers and update the public within days. She also reminded xAI that the Online Safety Act allows Ofcom to apply to the courts to block access to non‑compliant services in the UK. (gov.uk)

Overnight, xAI limited Grok’s image generation and editing on X to paying subscribers following a wave of complaints. Reuters and Business Standard report that the standalone Grok app has still allowed image generation without a subscription, which is why many campaigners say the change does not fix the root problem. (investing.com)

Let’s get our terms straight. A deepfake is an AI‑made image, video or audio that makes it look like someone did or said something they never did. ‘Nudification’ tools remove clothing or add explicit details to an image. UK law already criminalises sharing intimate images without consent and now requires major platforms to prevent and remove this material. Ministers have pledged to ban nudification apps and to criminalise the creation of intimate images without consent; new offences have been legislated via the Data (Use and Access) Act 2025 with further provisions to be commenced, alongside measures in the Crime and Policing Bill. (reuters.com)

What can Ofcom actually do? Under the Online Safety Act, the regulator can investigate, require changes, issue fines of up to 10% of a company’s qualifying worldwide revenue, and in serious cases ask the courts to restrict access to a service in the UK. Ofcom has also published guidance focused on keeping women and girls safer online, setting expectations that platforms design out abuse and support victims. (gov.uk)

Ofcom says it has made urgent contact with X and xAI over serious concerns that Grok can produce undressed images of people and sexualised images of children. Sky News’ reporting summarised cases involving minors and carried X’s response that it removes illegal content and permanently suspends offending accounts while Ofcom assesses if an investigation is warranted. (news.sky.com)

What changed on 9 January matters for your safety education. X now limits Grok’s image tools to paying subscribers and argues that stored card details help identify misuse. But Reuters found the separate Grok app still accessible for free, and independent reporting highlights why critics consider the ‘paid‑only’ change insufficient. (investing.com)

This is not only a UK story. Italy’s data protection authority has warned about deepfake risks and signalled cross‑border enforcement with Ireland. Indonesia, one of X’s largest markets, has reportedly suspended Grok over non‑consensual sexual deepfakes. Those signals add pressure on platforms to introduce stronger safeguards. (reuters.com)

For classrooms and families, the message is simple: consent governs images, not just physical contact. For under‑18s, any sexual image, AI‑generated or not, counts as child sexual abuse material. Do not share it to ‘prove’ harm; report it promptly to the police and your school safeguarding lead. Under the Online Safety Act, services must prevent British users from encountering illegal content and remove it once they know about it. (aljazeera.com)

If you’re targeted, capture evidence safely by noting the post URL, username, time and date, then report in‑app to X and to the host service. In the UK you can contact the police and the Revenge Porn Helpline. For teachers, make reporting routes visible and practise scenarios in tutor time so students know how to respond calmly. Ofcom’s implementation timeline makes clear that enforcement is no longer optional. (gov.uk)

What happens next matters. The minister wants Ofcom’s update ‘in days not weeks’. Ofcom’s 2026 roadmap includes guidance for super‑complaints in February and ‘technology notices’ by April, tools that can compel the use of accredited tech to block child sexual abuse and terrorism content. If X or xAI fall short of their duties, Ofcom can escalate to investigations, fines or, ultimately, court‑ordered access restrictions. (gov.uk)

Why this matters for media literacy: you may hear that “it’s just memes” or “no one is hurt”, yet investigations have documented real people, especially women and girls, being targeted, with long‑lasting harm. Understanding that harm, and the law’s focus on consent, helps you call out abuse and support those affected. (theguardian.com)
