UK to close Online Safety Act loophole for AI chatbots
Here’s the short version: on Monday 16 February 2026 the government said it will move within months to make the online world safer for children. The Prime Minister, Keir Starmer, set out plans to close a legal loophole so that AI chatbots must follow the Online Safety Act’s illegal‑content rules, alongside faster action on age limits, addictive features such as infinite scroll, and preserving data after a child’s death. If you’re a parent, carer or teacher, this signals real changes to the tools young people use every day. (gov.uk)
The chatbot change matters. Today, many large AI systems sit outside the Online Safety Act’s strictest duties. Ministers now want all chatbot providers to comply or face action. Ofcom can already issue fines of up to £18 million or 10% of a company’s global turnover under the Act, whichever is greater, and the intention is to make sure chatbots are squarely in scope. (ft.com)
Why now? Recent controversies showed the risks of AI tools that can generate illegal or harmful content on request. After UK pressure, X’s Grok limited its image‑generation feature when users demonstrated it could produce sexualised “undressing” edits, including of minors. That rollback followed direct scrutiny from ministers and regulators. (theregister.com)
The government will also launch a children’s digital wellbeing consultation next month. Expect questions on a minimum social media age (with under‑16s specifically referenced), curbs on infinite scroll and autoplay, and options to restrict or age‑gate children’s use of VPNs where these defeat safety tools. The consultation will look at changing the age of digital consent, too. None of this is a done deal yet, but officials want the power to act quickly once the evidence comes in. (gov.uk)
To speed things up, ministers plan to take new legal powers so they don’t need a fresh Act every time technology shifts. Proposals include using the Children’s Wellbeing and Schools Bill to enable targeted, faster measures after the consultation; and amending the Crime and Policing Bill so crucial social‑media data can be preserved following a child’s death. These moves would still face votes in Parliament. (gov.uk)
Parliament is already looking at related ideas. A crossbench amendment tabled in the Lords would create an offence for releasing AI chatbots that produce illegal content, signalling cross‑party concern about generative tools and child safety. This is separate from ministers’ plans, but it shows where the debate is heading. (bills.parliament.uk)
What this means for you right now: nothing changes overnight. But if you work with or care for young people, start planning for clearer age checks on platforms, tighter content controls in AI tools, and school policies that treat chatbots like any other high‑risk app. We’ll keep explaining the detail in plain language as it lands, so you can talk it through in class or at the kitchen table.
It’s worth stating the legal baseline. Sharing nude images of children is already a crime. The government now wants platforms and AI tools to prevent those images being created or sent in the first place, through stronger safeguards by design. That emphasis on prevention, rather than just takedown, will be central to the consultation. (gov.uk)
VPNs and digital consent are tricky areas. Policymakers are weighing up whether children’s use of VPNs that bypass safety settings should be age‑restricted, and whether the current age of digital consent still fits how services work today. Expect a debate about privacy, digital literacy and enforcement in schools. Your voice, as a parent, student or educator, will matter in that balance. (gov.uk)
Parents don’t have to wait to act. The Department for Science, Innovation and Technology has launched “You Won’t Know until You Ask”, a practical campaign with conversation starters, safety‑setting guides and advice on tackling ragebait and misogynistic content. It’s designed to help you talk to your child tonight, not in six months’ time. (gov.uk)
Child‑safety groups back faster moves. The NSPCC says swift action on age‑limit enforcement, addictive design and AI safeguards would protect children better than a blunt under‑16 ban, while the Molly Rose Foundation welcomes momentum but wants even stronger regulation. We’ll keep listening to young people’s experiences alongside expert evidence as policy firms up. (gov.uk)
What happens next: the consultation opens next month, then ministers aim to use secondary legislation to move quickly on specific measures. Ofcom remains the enforcement backstop, with the ability to fine companies up to 10% of global turnover if they break the rules, so platforms and AI providers will be paying attention. We’ll translate the detail as it arrives so you can make informed choices at home and in school. (ft.com)