The Speech Cut
The Online Safety Act 2023 is now being enforced. Illegal harms duties became enforceable in March 2025. Children’s safety codes came into force in July 2025, requiring platforms to implement age verification and configure algorithms to filter harmful content from children’s feeds. The Act is being rolled out in phases, with full implementation expected in 2026.^[1]
A parliamentary petition calling for the Act’s repeal gathered 550,137 signatures. It was debated in the House of Commons on 15 December 2025. The government said it had no plans to repeal the Act.^[2]
The Act was sold as child protection. Nobody is against child protection.
What the Act also does: it imposes compliance obligations so broad that small, non-commercial online forums (communities discussing trains, football, video games) have begun shutting down because they lack the resources to comply. The Wikimedia Foundation launched a judicial review challenging Wikipedia’s potential designation as a Category 1 service, warning that compliance would compromise the encyclopedia’s open editing model. The High Court rejected the challenge but left the door open to a future one.^[3]
The original draft of this law contained provisions targeting content that was “legal but harmful to adults” — a category that would have required platforms to moderate speech that broke no law. Parliament removed those provisions before the Act received Royal Assent. But the architecture that would have supported them remains. The Act gives the Secretary of State power to direct Ofcom to modify codes of practice. Ofcom defines what safety measures platforms must implement. Platforms comply using automated content moderation systems. The chain from political decision to algorithmic enforcement is intact even without the specific “legal but harmful” label.^[4]
The tools that enforce this law are not neutral. They are built by content moderation teams within technology companies, trained on datasets that reflect the judgments of their creators, and deployed at a scale where individual context disappears. A post that is ironic, regional, confrontational, or simply uncomfortable is processed by the same system that processes genuine threats. The algorithm does not distinguish between dangerous speech and difficult speech.
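To make that failure mode concrete, here is a minimal sketch of what threshold-based automated moderation reduces to. Everything in it is hypothetical: the classifier scores, the 0.8 cutoff, and the example posts are invented for illustration, not drawn from any real platform’s pipeline.

```python
# Hypothetical sketch of threshold-based automated moderation.
# The scores, threshold, and posts below are all invented for
# illustration; no real platform's pipeline is being described.

THRESHOLD = 0.8  # a platform-chosen "harm" cutoff

# A typical classifier returns one number per post. There is no input
# for irony, region, or conversational context.
posts = [
    ("I know where you live and I'm coming for you", 0.91),     # genuine threat
    ("Come to the away end Saturday, we'll destroy you", 0.87),  # football banter
]

for text, score in posts:
    # The entire decision collapses to a single comparison. Both posts
    # above are removed, because the score is all the system can see.
    action = "remove" if score >= THRESHOLD else "allow"
    print(f"{action}: {text!r} (score={score})")
```

The point is not that real systems are this crude. It is that however sophisticated the scoring function, the output is a single number applied at scale, and the context that would separate those two posts never enters the comparison.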
Nobody voted to outsource the boundaries of acceptable expression to automated systems. It happened through a law, implemented through codes of practice, enforced through content moderation tools, built by people who broadly agree on what harmful speech looks like.
The problem is not that they are wrong about some things. The problem is that they are the only ones in the room.
Question: When speech is moderated by systems built by people who share the same values, which voices get smoothed away?
Footnotes
^[1] GOV.UK, “Online Safety Act: explainer.” https://www.gov.uk/government/publications/online-safety-act-explainer/online-safety-act-explainer
^[2] UK Parliament petition 722903, “Repeal the Online Safety Act” (550,137 signatures; debated 15 December 2025). https://petition.parliament.uk/petitions/722903
^[3] Wikipedia, “Online Safety Act 2023” (on the Wikimedia Foundation’s judicial review). https://en.wikipedia.org/wiki/Online_Safety_Act_2023
^[4] Online Safety Act 2023, section 44 (Secretary of State powers to direct Ofcom on codes of practice). https://www.legislation.gov.uk/ukpga/2023/50
Morgan Hale is independent verification without the editorial filter. Every cut is evidenced. Every question is open. Because it matters.

