Balancing Online Safety and Personal Privacy: A Critical Examination of the UK Online Safety Act
The Rise of Digital Regulation
In the last decade, the digital world has become a central arena of both empowerment and harm. From disinformation to online harassment, suicide forums to AI-generated child sexual abuse material, the internet now reflects — and amplifies — the best and worst of human behaviour. In response, governments across the globe have begun introducing sweeping legislation to tame the online sphere. The UK’s Online Safety Act is one of the most ambitious and far-reaching of these efforts.
🔄 The Shift in Regulatory Philosophy
Historically, the internet operated on a principle of limited liability and platform neutrality. Major platforms like YouTube or Facebook were not legally responsible for user-generated content unless notified of illegal material. This created an environment of rapid innovation — but also left enormous gaps in accountability, particularly in cases involving children, abuse, or misinformation.
In contrast, the Online Safety Act marks a shift toward proactive platform responsibility. Companies are no longer allowed to be passive hosts. Instead, they must actively prevent, detect, and remove harmful or illegal content, often before it is reported. The emphasis is on risk management, not just content moderation.
🌍 A Global Trend
The UK is not alone. The EU Digital Services Act (DSA), the proposed US Kids Online Safety Act (KOSA), Australia’s Online Safety Act, and India’s IT Rules all reflect the same underlying trend: giving governments regulatory control over digital spaces in the name of protecting users. What sets the UK apart is the breadth and enforcement power granted to Ofcom, and the Act's potential to become a model for other English-speaking democracies.
But while these laws are presented as tools for online safety, they also raise profound questions about freedom of expression, privacy, state surveillance, and the power imbalance between governments, tech giants, and users.
⚖️ What’s Being Regulated — And Who Decides?
The UK Online Safety Act regulates:
Which content counts as legal but harmful
Which groups deserve heightened protection (e.g., children, women, ethnic minorities)
How platforms must design algorithms, verify ages, and enforce terms of service
In effect, the law empowers both regulators and platforms to act as arbiters of online morality — deciding what users should and should not see. This raises a critical issue: who defines “harm”, and on what basis?
As we will argue throughout this guide, many of these definitions are dangerously vague and open to overreach, and they lack proper oversight or appeal mechanisms. While the stated goal is user protection, the law introduces mechanisms of control that can be — and likely will be — misused.
🧠 Why a Critical Lens Is Essential
To blindly accept "safety" as a justification for regulation is to ignore history. Every system of censorship and surveillance has begun by targeting content deemed offensive, dangerous, or deviant. The key distinction is not whether regulation is necessary — it is. The real question is how we regulate the internet without sacrificing the foundational rights of privacy, free expression, and anonymity.
This guide proceeds from that question — not in denial of online harms, but in recognition that solutions must not create new harms in their place.
What the Online Safety Act Gets Right
It is easy — and often justified — to focus on the threats the Online Safety Act poses to privacy and freedom. But to critique fairly and constructively, we must also acknowledge what the Act gets right. Online harm is real. Digital platforms have, for too long, prioritised engagement and profit over user safety. And the legislative vacuum allowed avoidable tragedies — especially involving children — to persist for years.
This section outlines key strengths of the Online Safety Act, particularly where it addresses urgent, well-evidenced gaps in platform accountability.
👶 A Focus on Child Protection
The strongest and most defensible feature of the Act is its commitment to child safety.
What it mandates:
Platforms likely to be accessed by children must conduct risk assessments, design age-appropriate experiences, and deploy robust age assurance mechanisms.
Harmful but legal content (e.g. self-harm forums, eating disorder “thinspiration”, or extreme pornography) must be restricted or inaccessible to children.
Age limits must not just be stated, but actively and consistently enforced.
Why this matters:
Tech companies have historically claimed they cannot identify or protect children, despite engineering highly personalised recommendation engines.
Children, especially those under 13, are regularly exposed to violent, sexual, or mentally distressing content with little to no recourse.
The Act finally shifts the burden of safety away from individual parents and onto the companies that profit from children's data and attention.
⚖️ A Legal Duty to Tackle Illegal Content
The Act introduces enforceable obligations for platforms to:
Prevent, detect, and remove a defined set of priority illegal content, including:
Child sexual abuse material (CSAM)
Terrorism-related content
Intimate image abuse
Online fraud and scams
Incitement to violence
Respond promptly when such content is flagged, and design systems that reduce the likelihood of it appearing at all.
This turns what used to be a best-effort moderation policy into a legally enforceable duty. Failure to comply can result in:
Fines of up to £18 million or 10% of qualifying worldwide revenue, whichever is greater
Criminal prosecution of senior executives in cases of persistent non-compliance
These powers send a message that "platforms are not above the law" — a welcome corrective to years of under-enforcement.
📖 Codes of Practice and Independent Regulation
Unlike previous tech regulation efforts that relied on vague principles, the Online Safety Act introduces:
Detailed codes of practice, developed and enforced by Ofcom
Requirements for public consultation, with stakeholder input (e.g. the Victims’ Commissioner, children’s advocates)
Clear publication of risk assessment guidance, safety roadmaps, and enforcement timelines
This regulatory infrastructure provides a clearer framework for compliance than industry self-regulation ever offered. Ofcom’s phased implementation — including timelines for illegal content enforcement and children's risk assessments — offers predictability and transparency, at least in theory.
🌐 Accountability for Foreign-Based Platforms
A major loophole in earlier online regulation was jurisdictional evasion — companies outside the UK could often dodge liability. The Online Safety Act closes this by applying duties to:
Any service accessible in the UK, if it poses a material risk of harm
Platforms that target UK users or host large UK audiences
This ensures that safety standards apply regardless of corporate headquarters — a crucial step toward global digital responsibility.
🛠️ Empowering Users, Especially Adults
Though often overlooked, the Act also introduces optional tools for adult users:
Tools to filter or avoid harmful but legal content (e.g. content promoting suicide or eating disorders)
Options to limit engagement with non-verified users — offering protection from anonymous abuse
Requirements for platforms to enforce their own terms of service, preventing companies from ignoring abuse or harassment they claim to prohibit
This empowers users to shape their own content experiences without defaulting to censorship.
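As a concrete illustration, here is a minimal, hypothetical sketch of what user-controlled filtering could look like in code. The names, fields, and topics are invented for illustration and are not taken from the Act, Ofcom guidance, or any real platform's settings.

```python
# A hypothetical sketch of user-controlled filtering; every name here is illustrative,
# not drawn from the Act, Ofcom guidance, or any real platform.
from dataclasses import dataclass, field

@dataclass
class SafetyPreferences:
    hide_sensitive_topics: set[str] = field(default_factory=set)  # e.g. {"eating-disorders"}
    limit_contact_to_verified: bool = False                       # opt-in, never imposed

def should_show(post_topics: set[str], author_verified: bool,
                prefs: SafetyPreferences) -> bool:
    """Filter this user's own feed against their own settings; nothing is removed for anyone else."""
    if post_topics & prefs.hide_sensitive_topics:
        return False
    if prefs.limit_contact_to_verified and not author_verified:
        return False
    return True

prefs = SafetyPreferences(hide_sensitive_topics={"eating-disorders"},
                          limit_contact_to_verified=True)
print(should_show({"recipes"}, author_verified=True, prefs=prefs))            # True
print(should_show({"eating-disorders"}, author_verified=False, prefs=prefs))  # False
```

The design point is that the thresholds sit with the individual user, not the regulator: content is hidden only from the feed of someone who has chosen to hide it.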
✅ Simply Put: A Foundation Worth Building On… Carefully
The Online Safety Act addresses real problems:
The chronic failure of platforms to protect children
The need for strong responses to online criminality
The lack of consistent, enforceable standards for user safety
These are not abstract concerns — they are rooted in public tragedies, systemic platform failures, and years of digital harm. While this guide will go on to argue that the Act overreaches and creates dangerous new risks, it’s essential to start here: some of the Act’s goals are not only legitimate — they are overdue.
The challenge now is to protect these goals while defending civil liberties, especially the rights to privacy, expression, and anonymity.
Core Critique: Safety vs Surveillance
The Online Safety Act presents itself as a landmark in user protection — a declaration that the era of platform impunity is over, and that the digital world must be subject to the same moral and legal boundaries as the physical one. But beneath this promise of safety lies something far more unsettling: a creeping, systemic shift toward surveillance and control, quietly embedded in the name of protection.
At its heart, the Act does not merely regulate platforms. It transforms them. No longer are they just services that host content — they are now, functionally, instruments of state enforcement, charged with identifying, categorising, and pre-emptively suppressing speech and behaviour deemed harmful or undesirable. In doing so, the Act draws from the architecture of surveillance, demanding the widespread collection and analysis of user data to fulfil its legal obligations.
From Moderation to Monitoring
It’s important to understand just how dramatic a shift the Act mandates. Previously, most platforms operated under a “notice and takedown” model — content would be removed once reported and verified as violating terms or laws. The Act flips this model entirely. Now, companies must not only remove harmful or illegal content after it appears; they are legally required to identify it in advance — to build systems that predict, detect, and suppress it before it reaches users at all.
This can only be achieved through constant, pervasive monitoring. Automated tools, AI classifiers, real-time behaviour analysis — these are no longer optional features for large platforms; they are statutory obligations. To “protect” users, especially children, platforms must understand who their users are, what they are doing, what they are saying, and in many cases, what they are likely to do next.
In effect, the line between content moderation and mass surveillance has been erased. The government, through Ofcom, now mandates not just what must be removed, but how platforms must structure their systems — right down to their algorithms, user flows, and data practices.
The Death of Anonymity by Design
One of the most insidious casualties of the Online Safety Act is anonymity. The law doesn’t explicitly outlaw anonymous speech — but it quietly makes it almost impossible to preserve. Through its age assurance requirements and identity verification tools, the Act demands platforms know who is using their service and whether they’re telling the truth about it. What begins as a reasonable aim — protecting children from pornography or grooming — ends with the slow erosion of the right to exist online without being watched.
The issue isn’t just that age checks may require ID, facial recognition, or behavioural biometrics — though all of those are likely outcomes. It’s that anonymity itself becomes suspicious. Services are pushed to treat unknown users as risks, to restrict their access, deprioritise their content, or cut them off altogether. Users are encouraged to verify themselves to unlock features, to speak, to be seen.
And once verified, you are no longer anonymous — your presence is now part of a record, potentially available to law enforcement, to regulators, or, under less transparent conditions, to data brokers and foreign governments.
This fundamentally changes the shape of the internet. It penalises pseudonymity. It devalues privacy. It creates a tiered system in which the “trusted” user — the verified, compliant, traceable individual — is privileged, while the anonymous user is treated as inherently suspect.
Risk Assessments or Behavioural Profiling?
Central to the Act is the idea that platforms must assess the risks their services pose to users. On the surface, this sounds like common sense. In practice, it means building a continuous behavioural profile of user interactions — what they click, read, search, post, or linger on. Platforms must track and predict risk exposure, not just based on content, but on how algorithms shape user behaviour.
This is not regulation — it’s a mandate for predictive analytics. A platform’s legal safety duties are now tied directly to its capacity to detect patterns of behaviour and “manage” them, often through algorithmic filtering, recommendation suppression, or automated flagging systems. The same machinery that was once used to maximise engagement is now being used to police conformity to new social norms, enforced at the level of code.
Ofcom’s guidelines require platforms to assess how their algorithmic systems might increase exposure to harmful content. But what happens when the very act of viewing certain topics — mental health, sexual identity, drug use — is treated as a risk indicator? What stops a child searching for information about LGBTQ+ support from being flagged, filtered, or nudged away?
The answer, alarmingly, is nothing.
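To make the concern concrete, consider a deliberately crude sketch of topic-based risk flagging. It is not drawn from the Act, Ofcom's codes, or any real platform's system, but it shows how a filter that scores topics rather than intent treats a help-seeking query and a harmful one identically.

```python
# A deliberately naive illustration of topic-based risk flagging; the topic list and
# function are hypothetical, not the Act's wording or any platform's real system.
RISK_TOPICS = {"self-harm", "eating disorder", "drugs", "sexuality"}

def flag_for_restriction(search_query: str) -> bool:
    """Flag any query that touches a listed topic, blind to intent or context."""
    query = search_query.lower()
    return any(topic in query for topic in RISK_TOPICS)

# A support-seeking teenager and a bad actor receive exactly the same treatment:
print(flag_for_restriction("LGBTQ+ sexuality support groups for teens"))      # True
print(flag_for_restriction("how to help a friend who mentioned self-harm"))   # True
```

Production classifiers are far more sophisticated, but the structural problem is the same: once a topic is listed as a risk indicator, context and intent become invisible to the filter.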
The Power of Enforcement
Finally, there is the matter of enforcement. The Act gives Ofcom sweeping powers: to inspect, demand data, fine companies up to 10% of global turnover, prosecute executives, and — in the most extreme cases — disconnect non-compliant platforms entirely. On paper, this seems like a necessary toolset to take on tech giants.
But when paired with vague definitions of harm, and near-total discretion in enforcement, this power becomes dangerous. A platform may restrict content not because it believes it’s harmful, but because the risk of regulatory retaliation outweighs the value of user expression. Over time, this chilling effect reshapes online culture — not through bans or censorship, but through quiet, preventive suppression.
The message to platforms is clear: if you’re not sure, block it.
The Online Safety Act reframes the internet as a risk environment, in which users — especially children — must be protected from each other, and from themselves. In this logic, data collection is safety. Monitoring is care. Suppression is protection.
But when safety is achieved through surveillance, what is lost is not only privacy, but freedom — the freedom to explore, to dissent, to express vulnerability, to be unseen. And once those freedoms are surrendered, they are rarely given back.
Structural Risks and Rights Violations
For all its promises of protection, the Online Safety Act is constructed on a foundation that risks undermining fundamental rights. Its most dangerous feature is not its stated aims but its infrastructure of control — the systems it compels platforms to build, and the data it forces them to collect. Once these systems exist, they are almost impossible to contain. They create long-term structural risks that extend far beyond the current government or the specific harms the Act claims to target.
The Normalisation of Data Extraction
The Act’s safety mechanisms rely heavily on surveillance-like data practices: age verification, behavioural monitoring, and algorithmic risk assessment. While these may be intended to flag illegal content or protect children, they also normalise the mass collection of personal data.
Every user interaction — from what you click, to who you interact with, to the content you linger on — becomes part of a compliance dataset. Over time, these datasets form deep behavioural profiles. At best, these profiles are monetised by the companies that hold them. At worst, they can be accessed by governments, both domestic and foreign, in ways that are opaque to the public.
When data collection becomes a prerequisite for simply existing online, privacy stops being a default right and becomes a luxury — available only to those who can pay for privacy-protecting tools or navigate complex anonymisation techniques.
The Chilling Effect on Free Expression
The Act’s definitions of “harmful content” are deliberately broad. Harm does not have to be illegal to be regulated; it only needs to be deemed “potentially harmful” to certain categories of users. This is where the law’s structure invites overreach. Platforms, under threat of heavy fines or executive prosecution, are incentivised to over-remove content rather than risk non-compliance.
The result? A chilling effect on free expression.
Discussions around mental health, self-harm, or sexuality could be flagged or hidden, even if they are supportive or educational.
Political speech — especially content that is controversial, critical of government, or tied to minority identities — may be deemed too risky to host.
Small platforms, lacking the resources for nuanced moderation, may simply ban entire categories of speech to avoid legal exposure.
This is not a hypothetical problem. We have seen similar chilling effects in other jurisdictions — for instance, under Germany’s NetzDG law, where platforms have routinely removed content that is perfectly legal because the cost of error is too high.
Cross-Border Consequences
The Online Safety Act claims jurisdiction over platforms outside the UK if they are accessible to UK users. This means international companies must either comply with UK standards or risk being blocked. While this might sound like a victory for digital sovereignty, it also creates a global chain of surveillance obligations.
Consider a user in the UK interacting with a platform hosted abroad. To comply with UK laws, that foreign platform might need to collect and store personal data, even for users outside the UK. Once that data exists, it becomes vulnerable to foreign intelligence requests, hacking, or misuse.
This is particularly dangerous for at-risk groups — LGBTQ+ individuals, political activists, or people living under oppressive regimes. A young person visiting a forum about queer identity might not just be exposing themselves to moderation algorithms but, through poorly secured verification processes, could have their data leaked or shared across jurisdictions where being LGBTQ+ is criminalised.
The Fragility of “Safety” Infrastructure
The Act assumes that the systems it mandates will only ever be used for noble ends: keeping children safe, tackling harmful content, and stopping crime. But history shows that surveillance infrastructure rarely stays limited to its original scope.
Tools created to fight terrorism have been used to track protest movements.
Databases intended for public health have been accessed by law enforcement for unrelated investigations.
Social credit-style systems in some countries started as “safety and trust” mechanisms before evolving into tools of population control.
By building a framework of constant monitoring into the architecture of the internet, the Act creates a tempting apparatus for misuse. Today, it may be Ofcom and child protection advocates. Tomorrow, it could be a government less interested in safety and more interested in silencing dissent or policing personal identity.
The Erosion of Anonymous Spaces
One of the internet’s greatest strengths has been the ability for individuals to explore identities, seek help, and speak freely without attaching their real names or personal details. The Online Safety Act, by forcing platforms to verify user identities or apply age-assurance, effectively destroys anonymous spaces.
This disproportionately harms:
LGBTQ+ individuals, especially those not “out” in real life, who rely on pseudonymous communities for support.
Survivors of abuse, who often need to speak or seek help without fear of being found.
Political dissidents and journalists, whose work depends on remaining outside the surveillance of hostile regimes.
In the name of safety, the Act threatens to silence precisely the voices that need protection the most.
The structural risks are clear: once platforms are required to gather personal data and monitor user behaviour, the line between safety and control blurs. What begins as a child protection measure risks evolving into a systemic framework for censorship, profiling, and surveillance. And because these mechanisms are built into the private architecture of platforms, they operate quietly, invisibly — with no clear way for the average user to opt out.
Discriminatory Outcomes and Weaponised Infrastructure
The Online Safety Act does not explicitly target any one group. It presents itself as neutral — a universal framework to reduce harm, increase accountability, and protect users. But law, like technology, does not exist in a vacuum. It operates within existing power structures, cultural norms, and geopolitical realities. And in practice, systems built under the banner of neutrality often amplify discrimination rather than remove it.
The Act’s mechanisms — surveillance, identity checks, behavioural profiling — carry uneven consequences. Those most at risk of abuse and censorship are often the very communities the law claims to protect. The infrastructure of the Act, once deployed, can be weaponised — not always by design, but inevitably by function.
The LGBTQ+ Paradox: From Protection to Policing
The Act includes commendable provisions to protect users from hate speech, abuse, and harassment — much of which disproportionately targets LGBTQ+ individuals. However, the same features designed to offer safety can be turned inward, becoming tools of exposure, exclusion, and harm.
For LGBTQ+ users — especially minors or individuals in hostile households or countries — anonymity online is not a luxury, but a lifeline. It is often the only way to ask questions, find solidarity, or build identity away from judgement or violence. But with age verification and identity checks now becoming central to platform design, these users face an impossible choice: out yourself, or disappear.
Moreover, content about LGBTQ+ identity, health, or rights could be swept up in moderation filters as “sensitive,” “adult,” or “controversial,” even when educational or affirming. We already see this pattern on TikTok, Instagram, and YouTube, where algorithms quietly suppress queer content — not because it violates rules, but because it triggers vague risk parameters.
The Online Safety Act legitimises this kind of algorithmic discrimination. Under the pressure to avoid regulatory penalties, platforms are likely to over-filter anything that could be interpreted — however remotely — as sexual, deviant, or morally contentious. In practice, this means LGBTQ+ users are more likely to be shadowbanned, deplatformed, or excluded from spaces they helped build.
Marginalised Communities as Collateral Damage
The harms extend beyond LGBTQ+ groups. Ethnic and religious minorities, migrants, sex workers, and survivors of abuse are all vulnerable to the consequences of systems designed to prioritise “mainstream safety.” These groups frequently rely on alternative or fringe platforms — ones less likely to meet the technical standards or compliance costs imposed by the Act.
Smaller platforms, unable to afford robust moderation teams or verification tools, may simply shut down or exclude entire user demographics rather than risk legal exposure. This risks pushing marginalised voices out of moderated spaces and into unregulated, dangerous ones, where they are more exposed to exploitation, harassment, and abuse.
Even when platforms comply, the tools they build — facial recognition, document verification, behavioural analytics — reflect the biases of their creators. Facial recognition systems, for example, have been shown to disproportionately misidentify darker-skinned individuals, transgender people, and those with disabilities. Age-prediction AI is notoriously inaccurate for ethnic minorities. Yet the Act assumes these tools can reliably distinguish between adults and children, or between safe and risky users.
What happens when a teenager from a refugee background is misclassified and denied access to support resources? Or when a woman escaping domestic abuse is flagged as suspicious because she uses a VPN to mask her location? These are not edge cases — they are predictable failures of a system that was never designed to understand them.
Surveillance as a Cross-Border Threat
One of the most alarming implications of the Act’s infrastructure is how easily it could be repurposed by foreign regimes. The UK may promise that user data is protected and that the Act will not be used to criminalise identity. But the tools it mandates — data collection, identity verification, content tracking — are jurisdictionally agnostic.
Imagine a user from a criminalised LGBTQ+ community travelling abroad. Their browsing history, flagged content interactions, or ID-linked activity on UK-regulated platforms could make them visible to immigration authorities, customs agents, or foreign surveillance networks. This is not speculative: authoritarian regimes already use metadata, social media posts, and app activity to identify dissidents, queer people, and religious minorities.
The Act creates a unified digital profile system for every user it touches — and once built, such systems rarely remain under the control of the people they are meant to protect.
Weaponised Moderation and Political Censorship
Finally, the power granted to platforms under the Act — to remove, deprioritise, or restrict content in order to remain compliant — is ripe for political abuse. It opens the door to what could become a shadow censorship regime, in which controversial or oppositional speech is quietly suppressed under the pretext of “user safety.”
Content about protests, police misconduct, racial justice, or controversial historical events may be algorithmically downgraded because it “risks distress” or has been “reported at scale.” The Act does not require due process, transparency, or appeal at the level of automated moderation. It simply sets thresholds for risk and gives companies incentives to stay below them — often by silencing dissent before anyone notices.
This is how censorship operates in the 21st century: not with bans, but with invisibility. And under the Online Safety Act, invisibility becomes a feature — not a bug.
The Online Safety Act claims to make the internet safer. But it does so by building systems that cannot distinguish between protection and persecution. It treats all risk as a technical problem, solvable by data and enforcement. In doing so, it ignores how discrimination works — not through obvious hatred, but through systemic exclusion, design bias, and algorithmic erasure.
If we want true online safety, we must start by asking: safe for whom?
Because a system that protects the majority at the expense of the vulnerable is not safe. It is simply more polite in how it discriminates.
Comparisons: How Other Jurisdictions Handle It
The United Kingdom has taken a bold and uncompromising stance with the Online Safety Act — but it is not alone. Across the globe, governments are grappling with the same digital dilemmas: how to protect citizens from online harm without trampling on their rights. While the UK has been quick to legislate and eager to enforce, other jurisdictions have taken different paths — some more cautious, some more aggressive, and some more nuanced.
Comparing these approaches isn’t just academic. It shows us what’s possible — and what’s dangerous — when states reach into the digital world to impose order.
The European Union: The Digital Services Act
If the UK’s Online Safety Act is a sledgehammer, the European Union’s Digital Services Act (DSA) is more of a scalpel — still forceful, but more carefully balanced between enforcement and rights protection.
The DSA requires large platforms to assess systemic risks — including disinformation, child safety, and mental health — but it explicitly incorporates the EU’s Charter of Fundamental Rights. This includes the right to privacy, freedom of expression, and non-discrimination. Platforms must demonstrate how they manage risk without infringing on those rights.
Notably, the DSA does not demand real-name registration or mandatory age verification across the board. Instead, it focuses on transparency: platforms must explain their content moderation practices, reveal how algorithms influence user experiences, and provide appeal mechanisms for takedowns.
There are problems, of course. The DSA still risks over-enforcement and leaves some content decisions to opaque “trusted flaggers.” But its structure offers greater procedural safeguards, and its foundation is legal pluralism — not moral paternalism.
The United States: The Fragmented Frontier
In the United States, attempts at digital regulation remain fractured. There is no federal equivalent to the Online Safety Act or the DSA. Instead, states have started to experiment with their own laws — and many of them mirror the UK’s worst instincts.
Take the Kids Online Safety Act (KOSA), a bipartisan federal proposal. Like the UK’s legislation, it focuses on protecting children from harmful content. But it has been sharply criticised by civil rights groups, LGBTQ+ organisations, and free speech advocates for granting too much power to state attorneys general to decide what content is harmful to minors.
In more conservative states such as Texas and Florida, new laws demand that platforms protect against “censorship” of conservative viewpoints, creating direct tension with content moderation obligations. Others require age verification to access adult content, which has driven the adoption of invasive biometric tools and government-issued ID scans.
The result is a legal landscape defined by confusion, inconsistency, and litigation. But there is still one key legal bulwark: the First Amendment. While not invulnerable, it remains a powerful constraint on government overreach, ensuring that online regulation in the U.S. faces constitutional scrutiny in a way the UK law does not.
Australia: Centralised Power, Expanding Scope
Australia’s Online Safety Act 2021 predates the UK’s, but shares many of the same features: a centralised safety regulator (eSafety Commissioner), takedown powers, and obligations for platforms to protect users — especially children — from harm.
Where Australia diverges is in its growing use of direct censorship mechanisms. The eSafety Commissioner has the power to order the removal of content within 24 hours, with little opportunity for appeal. These takedown notices have already been issued for everything from violent videos to online abuse — but also for political content deemed “offensive” or “unlawful” by vague standards.
In 2024, proposals sought to expand these powers to cover misinformation and foreign interference, raising alarm over the lack of judicial oversight and the potential for abuse. Australia also promotes mandatory age verification and is experimenting with facial recognition for access to adult content — raising the same privacy alarms now echoing in the UK.
In practice, Australia demonstrates what happens when safety powers are centralised without clear democratic safeguards: regulatory discretion becomes the default, and user rights become optional.
Canada and New Zealand: Rights-First Hesitation
Canada has flirted with its own online harms legislation but has so far pulled back under public pressure. Early drafts of its proposed legislation were sharply criticised for lacking due process and overreaching into expression. Civil society pushed back, arguing that the proposal could be weaponised against marginalised communities, and the government paused its efforts to reconsider its scope.
New Zealand has adopted a more education-focused, harm-reduction approach — combining limited platform regulation with digital literacy programmes, non-punitive moderation schemes, and support services for young users. It has not yet moved toward the surveillance-heavy frameworks seen in the UK or Australia.
While neither Canada nor New Zealand has “solved” the problem, both demonstrate something the UK has largely overlooked: public trust and legitimacy are earned, not enforced. People are far more likely to comply with safety systems when they are transparent, participatory, and rights-respecting.
What These Comparisons Reveal
The UK likes to present itself as a global leader in online safety. But leadership is not just about speed or strength — it’s about wisdom. And in that sense, the UK's Online Safety Act reveals itself not as a pioneering model, but as a rushed compromise, one that prioritises enforcement over balance, surveillance over trust, and control over consent.
Other jurisdictions have shown that regulation can be done differently:
With built-in protections for rights and expression (EU)
With constitutional scrutiny and decentralisation (US)
With community-based models and restraint (NZ)
With cautious public consultation and legal revision (Canada)
What sets the UK apart is not just that it acts — but that it acts as if the harms of online speech are more dangerous than the harms of online repression.
Better Models: What Real Balance Looks Like
If the Online Safety Act teaches us anything, it’s that intentions are not enough. The desire to protect children, reduce harm, and hold platforms accountable is not controversial. In fact, it is broadly shared across the political spectrum, by activists, survivors, technologists, and even many of the platforms themselves. But protection without proportionality becomes paternalism. And safety without privacy becomes surveillance.
The challenge, then, is not whether to regulate the internet — it is how to do so without replicating the harms we seek to prevent.
There is no perfect formula, but there are models and principles that offer a more ethical, effective, and rights-respecting path forward. A future worth building is not one where we are safer because we are being watched — but one where we are safer because we are empowered.
Privacy-Preserving Age Assurance
One of the most urgent reforms needed in digital safety policy is in the area of age verification. The Online Safety Act leans heavily on vague requirements for “robust” age checks, without offering any technical standards or safeguards. In practice, this opens the door to mass identity collection, with companies demanding government IDs, biometric scans, or behavioural profiling to determine user age.
But it doesn’t have to be this way.
Technologists and privacy researchers have proposed — and in some cases implemented — systems that verify age without identifying users. These include:
Zero-knowledge proofs, where a user can prove they meet a certain age threshold without revealing their actual birthdate or identity.
Anonymous cryptographic tokens, which can be issued by a trusted third party and used to access age-gated content without creating a centralised user dossier.
On-device AI classifiers that never transmit personal data to the server, ensuring privacy remains local.
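As a rough sketch of the token approach, the example below shows a trusted issuer attesting only that a user is over 18, while the platform verifies that attestation without ever learning who presented it. It uses a shared HMAC key purely for brevity; a real deployment would rely on asymmetric signatures or a zero-knowledge proof, and every name in the sketch is hypothetical.

```python
# Minimal sketch of an anonymous age-attestation token, assuming a shared HMAC key
# between issuer and platform for brevity (real systems would use asymmetric
# signatures or zero-knowledge proofs).
import hashlib
import hmac
import os
import secrets

ISSUER_KEY = os.urandom(32)  # held by the trusted age-check issuer

def issue_token(user_is_over_18: bool) -> dict | None:
    """Issuer checks age once (e.g. against an ID it immediately discards)
    and returns a token carrying only the claim, never the identity."""
    if not user_is_over_18:
        return None
    nonce = secrets.token_hex(16)  # single-use value, unlinkable to the person
    claim = f"over18:{nonce}"
    tag = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def platform_accepts(token: dict, verify_key: bytes) -> bool:
    """Platform checks the attestation is genuine without learning who presented it."""
    expected = hmac.new(verify_key, token["claim"].encode(), hashlib.sha256).hexdigest()
    return token["claim"].startswith("over18:") and hmac.compare_digest(expected, token["tag"])

token = issue_token(user_is_over_18=True)
print(platform_accepts(token, ISSUER_KEY))  # True: age proven, identity never shared
```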
These tools are not speculative. They exist. What’s missing is political will — and regulatory clarity — to mandate their use over invasive alternatives. Privacy-preserving design should not be optional. It should be the default standard for all age-related controls.
Harm Reduction, Not Harm Erasure
The dominant logic of the Online Safety Act is that harmful content must be removed — ideally before it is ever seen. But in practice, much of what the Act labels as “harmful” is contextual, personal, and deeply subjective. Mental health discussions. Queer identity exploration. Support forums for survivors. These are not uniformly safe or unsafe — they are fragile spaces where support and danger often coexist.
Instead of erasing these spaces, regulation should be guided by a harm reduction model — one that asks how to make online environments less risky, not how to make them risk-free by force.
This could include:
Trusted moderation, not algorithmic suppression — empowering real, trained humans to oversee sensitive content with care and context.
Trigger warnings, content filters, and user-controlled tools, rather than blanket bans or automated takedowns.
Funding and visibility for peer-led support networks, instead of pushing vulnerable users toward clinical, state-approved resources they may not trust.
Harm cannot be engineered out of the internet. But it can be managed ethically, with the input of those most affected.
Democratising Enforcement
One of the gravest dangers in the current UK model is the concentration of enforcement power — in Ofcom, in large platforms, and in algorithmic tools with little transparency. A better model would redistribute this power, putting users and communities back at the centre of digital governance.
This means:
Independent appeals and review boards, where users can challenge content removals or algorithmic decisions.
Participatory policy development, where affected communities have a seat at the table when rules are written.
Real-time transparency dashboards, where platforms must show — in accessible language — how moderation decisions are made, and who makes them.
It also means limiting the discretion of regulators to act without oversight. Ofcom’s current powers include imposing secretive audits, demanding private user data, and enforcing content policy based on vague risk categories. A rights-respecting system would bind regulators to clear procedural standards, public accountability mechanisms, and judicial review.
Designing for Rights, Not Just Safety
Ultimately, the goal of online regulation must be to create platforms where people can flourish, not just survive. That means reorienting design away from pure risk management and toward affirmative digital rights.
We must demand:
Anonymity as a protected mode of participation, not a threat to be eliminated.
Freedom of expression, including for controversial, emotional, or marginalised voices — not just polite, sanitised content.
Digital literacy and user empowerment, so that people, especially young people, understand the tools they use, the data they generate, and the choices they have.
There is no universal algorithm for safety. But there can be a culture of informed consent, where people know how systems work and can meaningfully opt in — or out.
International Solidarity and Policy Alignment
As this guide has shown, other jurisdictions are experimenting with different models. The UK does not need to lead by force. It could lead by example — by choosing a framework that respects privacy, encourages transparency, and resists the temptation to overreach.
An aligned global standard could emphasise:
Cross-border privacy protections and data sovereignty.
Mutual recognition of rights-based safety mechanisms, rather than competing censorship regimes.
A shared commitment to keeping the internet open, accessible, and decentralised — even while tackling abuse and exploitation.
No country should become the global lab for normalising surveillance — and no democracy should be afraid to pause and reflect on whether it has gone too far.
True online safety is not achieved through control. It is achieved through trust, dignity, and shared responsibility. We do not need a panopticon to protect each other. We need the political courage to build systems that treat users not as problems to be managed, but as people — messy, vulnerable, and worthy of rights.
What Needs to Change in the Act
The Online Safety Act is not beyond saving. Beneath its invasive architecture and punitive edge lies a real opportunity — to create a safer, more accountable digital landscape. But to do that, we must strip it of its authoritarian instincts and rebuild it around rights, transparency, and user empowerment.
This section sets out the most urgent areas for reform. These are not minor tweaks or footnotes. They are structural correctives, without which the Act will continue to function as a tool of coercion rather than protection.
1. Ban Invasive Identity Verification
At present, the Act enables — and in some cases encourages — platforms to demand deeply personal user information in the name of age assurance or risk management. This includes government ID, biometric scans, or behavioural profiling.
This must stop.
The Act should be amended to:
Prohibit mandatory real-name registration as a condition of access to lawful content.
Mandate privacy-preserving verification options, such as zero-knowledge proofs or third-party token systems.
Require Ofcom to certify age assurance technologies not only for accuracy, but for privacy and data minimisation.
The right to access information — especially for vulnerable users — must not be conditional on surveillance.
2. Enshrine a Right to Anonymity and Pseudonymity
The internet has always been a space where people could explore identity, dissent, and vulnerability without revealing themselves. That right is now under threat.
The Act must affirm that:
Anonymous and pseudonymous participation is protected, especially in contexts involving health, identity, and political speech.
Platforms may not discriminate against unverified users in ways that restrict their visibility or basic participation, unless there is a demonstrable safety threat.
Any efforts to limit anonymity must pass a proportionality test, overseen by an independent rights body.
Anonymity is not a loophole. It is a shield — one that countless people depend on.
3. Limit Ofcom’s Enforcement Powers
Ofcom has been given extraordinary authority to demand data, levy fines, shut down platforms, and hold executives criminally liable. But it lacks a corresponding set of safeguards and accountability mechanisms.
The Act must be revised to:
Require judicial oversight for data access, platform takedowns, and criminal referrals.
Establish an independent oversight panel to review Ofcom’s regulatory decisions, especially those involving rights-sensitive content.
Create a clear and time-bound appeals process for platforms and users whose content or access is restricted.
Regulators should enforce the law — not interpret it unchecked.
4. Introduce Explicit Human Rights Tests
The current framework evaluates harm largely through platform risk assessments and regulator-defined categories. But it fails to weigh this against users’ legal rights.
To correct this imbalance, the Act must:
Incorporate a binding human rights impact assessment (HRIA) for all codes of practice and enforcement actions.
Require all content moderation tools, risk assessments, and algorithmic interventions to be assessed against the Human Rights Act 1998 and international standards like the UN Guiding Principles on Business and Human Rights.
Ensure that users whose rights are infringed — including to privacy, expression, and non-discrimination — have access to legal redress and remedy.
Safety must never override rights. It must be evaluated in relation to them.
5. Narrow the Definition of “Harmful” Content
The Act's most troubling ambiguity lies in its definition of “harmful but legal” content. This catch-all phrase allows for the restriction of speech that is distressing, controversial, or simply unpopular — even when it breaks no law.
This must be redefined.
The Act should:
Remove or sharply narrow provisions that regulate legal content based on subjective harm thresholds.
Focus enforcement on actual harm, supported by credible evidence — not speculative risk models.
Prohibit moderation practices that disproportionately impact marginalised or vulnerable communities, particularly in the absence of demonstrable harm.
A democracy cannot afford to govern speech by discomfort.
6. Protect Small Platforms and Open Source Projects
The Act's one-size-fits-all structure risks crushing small platforms, decentralised communities, and open-source services that cannot afford industrial-scale compliance.
To preserve a pluralistic internet, the Act must:
Exempt or simplify duties for non-profit, community-driven, and open-source platforms below a reasonable size threshold.
Provide public funding or regulatory guidance for rights-compliant moderation systems that are privacy-preserving and community-led.
Recognise that centralised surveillance tools are neither scalable nor appropriate for every digital environment.
We should not regulate the internet in a way that only corporate giants can survive.
7. Build Real Transparency and User Redress
The law rightly demands more of platforms — but offers users little in return. There is no meaningful path for individuals to challenge moderation, surveillance, or algorithmic injustice.
This must change.
The Act should:
Require platforms to offer clear and timely explanations for content removals, shadowbanning, or access restrictions.
Mandate a right to appeal and escalate moderation decisions to an independent ombudsman or digital rights body.
Guarantee data portability, access, and deletion rights, including for data collected under age assurance and safety systems.
Safety without transparency is just coercion in polite clothing.
This is not about weakening regulation. It is about strengthening its legitimacy. A safety law built on fear will always end up criminalising the very people it claims to protect. But a safety law built on rights, transparency, and shared governance can do something far more powerful: it can make the internet livable again — for everyone.
The Road Ahead: Advocacy, Resistance, Reform
The Online Safety Act is already law. Its surveillance apparatus is not hypothetical. Its enforcement is underway. Children’s risk assessments are due. Platforms are rewriting their policies. Ofcom is preparing new codes. In a matter of months, the legal and cultural norms of the UK internet will have changed — not gradually, but fundamentally.
But law is not static. It is a living instrument — capable of being resisted, reinterpreted, amended, and even repealed. What matters now is what we do with that reality. Because if we are to defend the internet as a space of freedom, experimentation, and dissent, we must act — collectively, relentlessly, and with clarity.
The Role of Civil Society: Watchdogs, Not Cheerleaders
Civil society must be more than a stakeholder invited into consultations. It must be a watchdog, tracking enforcement, exposing overreach, and amplifying the voices of those being silenced.
Digital rights groups, privacy organisations, child safety advocates, mental health workers, and minority-led coalitions all have a role to play. But they must refuse to be tokenised. Participating in policy is not the same as endorsing its outcomes. Critical distance must be maintained — and vocal dissent must remain part of the process.
We need:
Public oversight coalitions monitoring Ofcom’s use of its powers.
Legal support funds to back challenges against unjust moderation or surveillance.
Whistleblower protections for tech workers who reveal abuses inside enforcement systems.
Litigation and Legal Challenges
Parts of the Online Safety Act are ripe for legal contest. Provisions that undermine the right to privacy, suppress lawful speech, or disproportionately impact marginalised users could be challenged under:
The Human Rights Act 1998
The European Convention on Human Rights (ECHR)
UK constitutional principles of proportionality and due process
Lawyers, academics, and human rights advocates should begin now to prepare strategic litigation — not to delay regulation, but to civilise it. Every case brought before a tribunal or court is an opportunity to set precedent, clarify limits, and force transparency.
User Empowerment and Digital Literacy
No system can protect what its users do not understand. As safety tools and surveillance systems become embedded in everyday online life, the public needs new forms of digital literacy — not just about phishing scams or cyberbullying, but about how data is collected, how algorithms work, and how to protect yourself in a monitored environment.
This includes:
Education campaigns around privacy-preserving tools (VPNs, encrypted messaging, anonymous browsers).
Guidance on recognising when content is being deprioritised or censored.
Support for youth-led education movements, so that young people can navigate online life with agency, not fear.
Digital safety must begin with digital autonomy.
Building Alternative Platforms and Protocols
While regulatory reform is essential, we cannot rely solely on law to protect our digital spaces. We must also build alternatives — technologies and communities that bake in rights and trust at the design level.
This means:
Supporting open-source, decentralised platforms where users set the rules.
Creating privacy-first social media ecosystems that don’t rely on surveillance to function.
Funding and elevating community moderation models that replace automated enforcement with care and consent.
Reclaiming the internet means diversifying its architecture.
A Cultural Shift: From Control to Care
Ultimately, we must shift the narrative that underpins so much of the Online Safety Act: the idea that the internet is inherently dangerous, that users must be controlled, and that tech must be policed into submission.
We need a politics of digital care, not control. A framework that says:
Harm exists — but people are not problems to be managed.
Children deserve protection — and also autonomy, trust, and agency.
Platforms require accountability — but not at the cost of open society.
This is not naïve. It is visionary. And it is necessary. Because if we do not define a new vision of online safety — one rooted in justice, dignity, and rights — then others will define it for us. And they will define it in the image of power.
Simply Put
The Online Safety Act may have passed — but the fight for a free, fair, and safe internet is far from over. Every flawed system can be reformed. Every piece of legislation can be challenged. Every abusive structure can be dismantled.
Let this guide be not just a critique, but a starting point. For policy-makers brave enough to listen. For technologists bold enough to build better. And for users — all of us — who refuse to be watched into silence.
The internet is ours.
Let’s keep it that way.