UK AI rules order ICO code on children’s data

Britain has quietly passed a new AI data rule, but its effect could reach well beyond Westminster. The Data Protection Act 2018 (Code of Practice on Artificial Intelligence and Automated Decision-Making) Regulations 2026 were made on 16 April 2026, laid before Parliament on 21 April, and come into force on 12 May. They require the Information Commissioner to produce a code of practice on how personal data should be handled when AI and automated decision-making are developed or used, and the code must cover children’s personal data too. (changeflow.com) If you are wondering whether this is a brand-new UK AI law in the broad sense, the answer is no. It is narrower and more specific than that. The regulations apply across England and Wales, Scotland and Northern Ireland, and they sit inside existing data protection law: the UK GDPR and the Data Protection Act 2018, excluding Part 4 on intelligence services processing. (changeflow.com)

This is the sort of legal change that is easy to misread. The regulations do not create a full rulebook for every school, app, employer or public body to follow tomorrow morning. Instead, they tell the Information Commissioner to write formal guidance on ‘good practice’ for AI and automated decision-making under data protection law. In plain English, the Government has decided the UK now needs a dedicated guide to what responsible use should look like when personal data and AI meet. (changeflow.com) The Explanatory Note adds another useful detail. No full impact assessment was produced for the instrument itself because no significant impact was expected from the regulations alone. That tells you something important: the heavier detail is still ahead. The Commissioner must produce an impact assessment when the actual code is prepared, which is when the practical effects should become much clearer. (changeflow.com)

To see why this matters, it helps to slow the legal language down. The ICO says automated decision-making means decisions made without human involvement, such as an online credit decision or a recruitment aptitude test judged by pre-programmed criteria. At the same time, the Data (Use and Access) Act 2025 rewrote this area of law, replacing the old Article 22 structure with new Articles 22A to 22D and setting rules for significant decisions based solely on automated processing. (ico.org.uk) Those safeguards are not small. The official notes to the 2025 Act say people must be told when significant decisions are taken solely by automated processing, must be able to contest the decision or make representations about it, and must be able to ask for human intervention. The ICO’s public guidance also says organisations must not make these kinds of serious decisions unless a lawful route applies, such as law, contract or explicit consent. If software is judging whether you get a job interview, credit, or another outcome that really affects your life, that is the standard the law is trying to hold organisations to. (legislation.gov.uk)

The children’s data point is not a side note. The regulations expressly say the code must include guidance on good practice for processing children’s personal data. That fits with existing ICO guidance, which says organisations using children’s personal data for profiling or automated decisions should carry out a data protection impact assessment because this kind of processing is likely to create a high risk to children’s rights and freedoms. (changeflow.com) The ICO goes further than that. It says this sort of processing should not be the norm for children, and that if it is used, children must be told about it in language they can understand, with clear routes to human intervention and challenge. For us, that turns this into more than a Westminster paperwork story. It matters anywhere young people are being scored, sorted, nudged or profiled with personal data, including online services and education settings. (ico.org.uk)

There is, however, one line in the small print that deserves more attention. The code will cover the UK GDPR and the Data Protection Act 2018, but not Part 4 of that Act, which deals with intelligence services processing. On top of that, the regulations add a new rule saying the panel set up to consider the code must not consider or report on any aspect of it relating to national security. (changeflow.com) That does not mean national security uses of AI are unregulated. It does mean this particular code will not be the place where the public gets a fuller panel-led discussion of those uses. So we can reasonably say this is a code about mainstream data protection practice, not a full public map of every sensitive state use of AI and personal data. (changeflow.com)

This did not appear out of nowhere. In March 2026, the ICO said the Government was developing the secondary legislation that would require the office to produce an AI and automated decision-making code of practice, and that preparation work was already under way. The ICO also said its draft guidance on automated decision-making would feed into parts of the future code. (ico.org.uk) That draft guidance consultation is now open. According to the ICO, it began on 31 March 2026 and runs until 29 May 2026, following the introduction of the Data (Use and Access) Act 2025. So these regulations are not a bolt from the blue; they are the legal step that turns a promised code into a formal duty. (ico.org.uk)

If you are a teacher, student, parent or just someone who lives half your life online, there are two things to keep hold of. First, if an organisation wants to use AI to make significant decisions about people, data protection is still very much in the room. The law points towards explanation, challenge and real human review, not a shrug and a black box. That matters to employers, public bodies, tech firms and anyone building tools that affect people’s opportunities, money, reputation or choices. (legislation.gov.uk) Second, remember the date 12 May 2026, when the regulations take effect. But the bigger test comes after that, when the Information Commissioner publishes the code and its impact assessment. That is the point at which we will be able to judge whether the UK’s talk about safe, fair AI turns into clear standards that young people and the rest of us can actually use. (changeflow.com)