The Age-Gated Internet Revisited: Identity, Trust and the Architecture of Control

This article responds to thirty-two questions raised in reaction to my earlier piece, “The Age-Gated Internet: Child Safety, Identity Infrastructure, and the Not So Quiet Re-Architecting of the Web”, which explored how age verification and identity systems are beginning to reshape the internet. It examines the assumptions behind these developments and situates them within a broader architectural shift.

Executive Summary

This article examines the emergence of identity as a foundational layer of the modern internet. While often framed as a response to online harms, particularly in the context of child safety, identity systems function more broadly as mechanisms for attribution, liability distribution, and behavioural visibility. The pressures driving their adoption are real, but the systems being constructed extend far beyond their stated purpose and begin to reshape the underlying properties of the system itself.

The analysis groups thirty-two core challenges into six domains: governance and liability, trust and institutional legitimacy, surveillance and control, the nature of identity, unintended consequences, and the impact on freedom and anonymity. Across these domains, a consistent pattern emerges. Identity systems do not eliminate harm; they reallocate responsibility, increase system visibility, and expand institutional control. They introduce new risks, concentrate power, and create asymmetries between individuals, the institutions that govern them, and the systems they participate in.

Once deployed, these systems tend to persist and expand. What begins as a targeted intervention evolves into a generalised infrastructure. The result is a structural shift from episodic, anonymous interaction to continuous identification. The central question is not whether identity should exist, but whether its boundaries can be meaningfully defined and constrained before they become embedded and difficult to reverse.

Acknowledgements and Contributions

This article has been materially shaped by the contributions of several individuals whose expertise and perspectives have significantly strengthened the analysis.

Simon Freeman kindly contributed the original set of thirty-two questions that form the backbone of this piece. Working closely with the Cabinet Office on identity and digital infrastructure, and with deep experience in national-scale identity systems, he offers observations that cut directly to the operational realities, trade-offs, and unintended consequences of identity implementation. Many of the tensions explored here originate in his framing of the problem.

John Caswell provided a critical strategic perspective on the second-order effects of identity infrastructure, particularly its impact on behaviour and cognition, and on the conditions under which individuals explore and participate online. His insight that identity-gated systems risk conditioning future generations to interact on a permission-based basis is reflected in the analysis of participation and anonymity.

This analysis aligns with my broader work on societal transformation and digital systems. In my writing on societal evolution, I explore how underlying social structures are shifting as systems become more integrated and deterministic. In parallel, my work on the “Asymmetric Integration Model” examines how social media platforms evolve by recruiting humans as the empathy layer within increasingly automated systems, making them more adhesive and behaviourally influential. Identity infrastructure should be understood within this wider pattern.

The original article was also written with Ian Dunmore in mind, whose ongoing curiosity, challenge, and engagement continue to act as a catalyst for thinking through complex systems clearly and without simplification.

Any errors, interpretations, or conclusions remain my own.

1. Introduction: Identity, Trust, and the Quiet Shift from Safety to Control

I am grateful that Simon Freeman took the time to respond to my earlier article, “The Age-Gated Internet: Child Safety, Identity Infrastructure, and the Not So Quiet Re-Architecting of the Web”. Freeman works with the Cabinet Office on identity and digital infrastructure, with experience spanning government identity systems, digital architecture, and large-scale implementation, including the design of the Government Gateway.

Rather than offering a simple reaction, he posed thirty-two questions and observations. They were not superficial. They cut to the core of identity, trust, enforcement, and the architecture of the systems now being built.

This article is a response to those questions.

Because taken together, they do not simply critique age verification. They expose something more fundamental.

The original article was written out of frustration with the quality of the debate. Too often, it collapses into either technocratic certainty or conspiracy. The intention here is simply to analyse what is actually being built, because without that, there is no way to understand where it leads.

2. The Thirty-Two Questions

What follows is a structured rendering of those points, with examples where they illuminate the argument. Many of these observations align with current regulatory developments, including the UK’s Online Safety framework and parallel efforts in the EU to introduce stronger forms of age assurance and digital identity.

  1. Laws already exist. Children cannot legally contract, yet they access platforms. The issue is not the law but enforcement, and enforcement has been outsourced to a chain of regulators and third parties. Each layer focuses on liability, not harm prevention.
  2. Privacy and secrecy are not the same, but cannot be cleanly separated. One only knows whether privacy protects something legitimate or illegitimate after the fact.
  3. Identity is a matter of trust. Trust in institutions is weakening, and without it, identity systems become suspect.
  4. The “slippery slope” once dismissed as paranoia now appears plausible in a world where accountability mechanisms are visibly eroding.
  5. With sufficient identity data, sensors, and integration, total surveillance becomes technically possible. For example, linking vehicle telemetry, number plate recognition, and identity enables continuous monitoring rather than post-incident enforcement.
  6. Systems risk inconveniencing the majority while failing to prevent determined bad actors. The burden is universal, the benefit selective.
  7. Stronger identity systems increase the reward for compromise. If identity is harder to fake, it becomes more valuable to steal.
  8. Once implemented, strong identity systems cannot easily be undone.
  9. Centralised identity creates control risk; decentralised identity creates duplication and inconsistency. There is no stable equilibrium.
  10. Reducing individual freedom as a means of preventing crime is not demonstrably effective. We accept some level of risk in exchange for freedom.
  11. Identity is ill-defined. It is contextual. A person is not a single role or attribute set, yet systems treat identity as fixed.
  12. In practice, systems want uniqueness, not identity. For many use cases, proving a single attribute, such as being over 18, should be sufficient.
  13. Identification today is episodic. The future points towards continuous identification through integrated systems.
  14. Government systems historically joined identifiers across departments but stopped short of binding them fully to the individual human.
  15. Identifying children online may make them more visible to those who would target them.
  16. Who has the right to demand identity, and under what conditions, is largely undefined and often buried in opaque terms.
  17. Physical identity checks are transient. Digital identity is captured, stored, and reused. The difference is structural, not procedural.
  18. Once identity data is collected, its use expands. For example, providing protected characteristics in job applications raises questions about how that data is actually used.
  19. Identity shapes and filters what individuals see. It becomes difficult to detect whether content has been filtered or altered.
  20. Refusing identity may lead to exclusion. Participation becomes conditional.
  21. Identity offers little direct benefit to the individual. It allows one to do what was already possible, but with more friction.
  22. Attribute-based verification is preferable, but requires strict limits on data use and retention.
  23. Identity attributes are not static. Many are fluid or context-dependent, making verification complex.
  24. Delegating identity to large corporations introduces misaligned incentives and liability concerns.
  25. Binding identity to a human requires intrusive mechanisms. Biometrics may be passive, as with facial recognition, removing consent.
  26. The core questions of identity remain unresolved despite decades of work. Technology has advanced faster than governance.
  27. Anonymity is a foundational property of society. It allows individuals to act without prior permission.
  28. The public assumes identity systems will benefit someone, but not them.
  29. Access to justice is limited. When identity systems fail or are misused, individuals have little recourse.
  30. The costs of fraud and failure are borne by society. Accountability is diffuse.
  31. Policy decisions are often driven by the need to be seen to act, rather than long-term effectiveness.
  32. Strong identity systems risk eliminating anonymity entirely.

3. The Six Buckets of Identity Questions

These observations, originally articulated by Simon Freeman, provide a structured lens for examining the system. Taken at face value, they appear discrete. Viewed collectively, they cluster into six underlying domains, each describing a different failure mode or tension within identity systems:

  • Governance, law, and liability
  • Trust, power, and institutional legitimacy
  • Surveillance and control
  • The nature of identity itself
  • Risk, harm, and unintended consequences
  • Freedom, anonymity, and social structure

Each is explored below.

3.1 Governance, Law, and Liability

(Points: 1, 6, 30, 31)

Freeman’s observation:

At the foundation is a misalignment between law, enforcement, and responsibility.

Response:

  • Laws exist, but cannot be enforced effectively at scale
  • Responsibility is pushed from government to regulators, to platforms, and ultimately to users
  • Systems optimise for demonstrating compliance rather than preventing harm
  • Costs of failure are distributed across society
  • Policy is often driven by the need to be seen to act rather than to be effective

This is not a system designed to eliminate harm. It is a system designed to manage liability.

3.2 Trust, Power, and Institutional Legitimacy

(Points: 3, 4, 29)

Freeman’s observation:

Identity systems depend on trust, but trust is weakening.

Response:

  • Citizens increasingly question whose interests institutions serve
  • Accountability mechanisms appear less effective than assumed
  • Access to justice is limited or impractical

In this context, identity is not neutral. It becomes an extension of institutional power.

Without trust, identity shifts from being a convenience to being a form of exposure.

It is also the case that low-trust environments can increase demand for identity systems. Where anonymity enables abuse, fraud, or manipulation, there is a corresponding pressure for stronger forms of verification. This creates a tension: the same lack of trust that makes identity systems suspect can also be used to justify their expansion.

3.3 Surveillance and Control

(Points: 5, 10, 13, 25)

Freeman’s observation:

When identity is combined with data and automation, it enables a qualitatively different form of system behaviour.

Response:

  • Monitoring shifts from episodic to continuous
  • AI enables real-time analysis across large datasets
  • Behaviour can be observed, predicted, and acted upon
  • Biometrics allow passive identification without consent

The result is not simply better enforcement, but the emergence of persistent behavioural visibility.

3.4 The Nature of Identity Itself

(Points: 11, 12, 14, 23)

Freeman’s observation:

A deeper issue sits beneath the implementation: identity is not a stable concept.

Response:

  • Identity is contextual and role-dependent
  • Most use cases require only minimal attributes
  • Binding digital identity to a real human is non-trivial
  • Many identity attributes are fluid, not fixed

Yet systems tend to treat identity as singular and persistent.

This mismatch between reality and implementation introduces systemic fragility.

Alternative approaches exist in principle, such as attribute-based verification and selective disclosure, including privacy-preserving techniques like zero-knowledge proofs. However, these approaches face structural barriers: they are harder to monetise, less aligned with regulatory preferences for strong attribution, and difficult to integrate across fragmented systems. As a result, they remain technically viable but institutionally marginal.
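To make attribute-based verification concrete, here is a minimal sketch in Python. It is illustrative only: the names, the HMAC-signed token format, and the shared issuer key are assumptions for the sketch, not any specific standard (real deployments would use asymmetric signatures, as in verifiable-credential schemes). The point it demonstrates is selective disclosure: an issuer who has privately checked a document signs a single claim, `over_18`, and the relying service verifies that claim without ever seeing a name, date of birth, or identifier.

```python
import base64
import hashlib
import hmac
import json

# Hypothetical issuer key. With a shared secret, any verifier holding the key
# could also mint tokens; an asymmetric key pair would remove that flaw.
ISSUER_KEY = b"demo-issuer-secret"

def issue_age_token(over_18: bool) -> str:
    """Issuer side: after checking a document privately, sign ONE attribute.
    Nothing else about the person is embedded in the token."""
    claim = base64.urlsafe_b64encode(
        json.dumps({"over_18": over_18}).encode()
    ).decode()
    sig = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return f"{claim}.{sig}"

def verify_age_token(token: str) -> bool:
    """Relying service: learns only whether the bearer is over 18."""
    claim, sig = token.split(".")
    expected = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered or forged token
    return json.loads(base64.urlsafe_b64decode(claim))["over_18"]

token = issue_age_token(True)
print(verify_age_token(token))  # the verifier sees one boolean, nothing else
```

Even this sketch carries the residual risks the surrounding analysis describes: the token is replayable, the key is a single point of failure, and the issuer still learns when and how often tokens are requested, which is precisely the metadata-leakage problem that zero-knowledge techniques attempt to reduce.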

Even privacy-preserving approaches introduce new risks, including key management failures, coercion, and metadata leakage, reinforcing the idea that there is no stable or risk-free identity model.

3.5 Risk, Harm, and Unintended Consequences

(Points: 2, 7, 8, 15, 24)

Freeman’s observation:

Efforts to strengthen systems introduce new risks.

Response:

  • Privacy and secrecy cannot be cleanly separated
  • Stronger systems increase the incentive to attack them
  • Centralisation increases the impact of failure
  • Identifying vulnerable groups may increase their exposure
  • Delegating identity to corporations introduces new risks

These are not edge cases. They are structural consequences.

3.6 Freedom, Anonymity, and Social Structure

(Points: 9, 16–22, 26–28, 32)

Freeman’s observation:

The largest cluster relates to the societal impact of identity systems.

Response:

  • Anonymity is a foundational property of open societies
  • Identity changes who can participate, and on what terms
  • Power lies in who can demand identity and why
  • Identity data is captured, stored, and reused
  • Systems expand beyond their original purpose
  • Individuals gain little direct benefit
  • Technology is advancing faster than governance
  • Identity systems are difficult to reverse once established

Taken together, these points describe a shift in the relationship between the individual and the system itself.

4. Why This Matters

This structure does something important.

It shows that the debate is not about a single policy decision, such as age verification. It is about a set of interacting dynamics:

  • Liability systems replacing enforcement
  • Low trust interacting with high power
  • Identity enabling surveillance
  • An unstable concept treated as fixed
  • Risk redistributed rather than removed
  • Freedom increasingly reshaped through infrastructure

Once seen this way, the argument becomes clearer.

We are not implementing a feature.

We are constructing a layer.

And that layer has properties that extend far beyond the problem it is meant to solve: it does not eliminate harm; it reallocates responsibility, increases visibility, and expands control.

5. What These Questions Reveal

At a conceptual level, these questions resolve into a single underlying claim.

Taken individually, these points are observations. Taken together, they form a coherent model of how identity systems actually behave, and they suggest that identity, as currently implemented, is not primarily a tool for safety but a system for attribution, liability distribution, behavioural visibility, and control.

This is the misalignment at the centre of the current debate. We are told a story about protection, but what is being built is infrastructure. In effect, identity systems operate as a chain: identity enables attribution, attribution enables liability distribution, and persistence enables visibility and control.

The conclusion is that identity systems, as currently implemented, do not resolve the underlying problems they are presented as addressing. Instead, they reshape how those problems are managed, who is responsible for them, and how individuals are observed and controlled within the system.

Once capabilities are introduced, their evolution is rarely transparent or tightly bound. Control does not necessarily expand through explicit political intent, but through incremental administrative and operational decisions over time.

This infrastructure is not being developed in response to these questions, but largely in parallel with them. In practice, systems are built first, and their implications are addressed retrospectively. This pattern is familiar across fast-moving technology domains, where capability precedes governance. The result is that many of the risks identified here are not designed outcomes, but emergent properties introduced through the back door.

This is not a failure of intent, but a failure of sequencing: capability first, governance second.

While governance mechanisms such as oversight bodies, data minimisation requirements, and regulatory constraints exist in theory, their effectiveness depends on enforcement, institutional incentives, and long-term political will. In practice, these mechanisms tend to follow, rather than shape, system design. The question is not whether governance is possible, but whether it can operate at the same speed and scale as the systems it seeks to constrain.

6. From Enforcement Failure to Liability Distribution

It is important to acknowledge that the status quo is not functioning well. Underage access to platforms is widespread, harmful content is not consistently contained, and enforcement mechanisms have struggled to operate at scale. These pressures are real, and they are a primary driver behind the move towards identity-based controls. They reflect genuine societal demand for intervention, not simply regulatory overreach.

In some domains, this failure is not marginal but systemic. The scale of issues such as child exploitation, coordinated fraud, and large-scale manipulation has outpaced traditional enforcement models. In these contexts, weak or absent identity signals do not simply preserve freedom; they can materially enable harm at scale.

This is evident in current enforcement approaches, where platforms are required to implement “highly effective” age assurance, while responsibility for outcomes remains diffuse among regulators, providers, and users.

The starting point is straightforward. Harm exists. Laws exist. Enforcement fails at scale. The response is not to improve enforcement directly, but to substitute it with identity-based attribution, redistributing responsibility across the system from government to regulator, regulator to platform, and ultimately to the user.

Each layer must demonstrate that it has done enough, and identity enables this. It creates a record, allows attribution, and provides a defence. It does not prevent harm. It makes harm attributable, and in doing so, clarifies where liability sits, often flowing downwards towards those least able to absorb it.

7. From Identity to Continuous Visibility

Once identity is persistent, it does not remain isolated. It connects to devices, payments, behaviour, and movement.

The example of vehicle tracking illustrates the shift clearly. Historically, enforcement followed an incident. With integrated identity and telemetry, monitoring can be continuous. Behaviour can be observed, modelled, and acted upon in real time.

A more immediate example is the integration of identity into commercial platforms. Sellers on marketplaces such as eBay are now required to provide National Insurance numbers once activity thresholds are reached, with transaction data automatically reported to HMRC. This represents a shift from investigation based on suspicion to continuous reporting based on the assumption of potential liability.

The distinction is subtle but important. Rather than identifying wrongdoing, the system assumes the possibility of it and introduces persistent visibility as a default condition. What was once episodic oversight becomes embedded surveillance, enabled through identity linkage.

The same applies in public space. CCTV once recorded passively. Now, with automated analysis, it enables continuous tracking across environments. What was once exceptional becomes normal.

This is not a marginal enhancement. It is a change in the nature of the system.

8. The Instability of Identity

At the same time, the concept being operationalised is unstable. Identity is not a fixed object. It is a collection of attributes that vary by context, and most interactions require very little information.

Yet systems tend towards aggregation. An age check becomes an identity check. An identity check becomes a persistent profile. That profile becomes reusable across contexts.

This expansion is not explicitly designed. It emerges from the utility of the data once collected.

9. Privacy, Secrecy, and the Limits of Control

Attempts to separate privacy from secrecy run into a fundamental problem. One cannot determine whether concealed behaviour is legitimate or harmful without inspecting it, and by the time inspection occurs, privacy has already been compromised.

When combined with identity, this also enables differential experiences of the same system, where what is visible, prioritised, or suppressed may vary by individual without transparency or recourse.

This creates a structural tension. Systems that aim to eliminate misuse must intrude, while systems that preserve privacy must tolerate some level of misuse.

There is no clean resolution.

10. Adversarial Dynamics and Unintended Outcomes

Strengthening identity does not remove risk. It reshapes it. Harder-to-forge identities become more valuable targets, centralised systems increase the impact of compromise, and identified groups may become more visible to those who would exploit them.

The example of identifying children is particularly stark. A system intended to protect may, in certain configurations, make those individuals easier to find.

These are not failures of implementation. They are properties of the system.

11. The Quiet Expansion of Scope

Identity systems rarely remain confined to their initial purpose. Data collected for one reason becomes useful for others, whether for enforcement, analysis, or optimisation.

The distinction between inspection and capture is critical. A physical identity check is transient. A digital identity check is persistent, stored, linked, and reused. Over time, this changes the system from one that verifies to one that accumulates.

12. Anonymity and Participation

Anonymity is often framed as a vulnerability, but it is also a condition of participation. It allows individuals to engage without prior permission, and to explore, speak, and test ideas without immediate attribution.

If identity becomes a prerequisite for participation, anonymity becomes conditional, and the system itself changes.

This reflects a point raised by John Caswell, who characterises this shift as a move towards permission-based interaction as a baseline condition of participation. Over time, participation shifts from exploration to optimisation within perceived constraints.

This dynamic is already visible in emerging practices where access or participation is conditioned not only on identity, but on expressed views. For example, the use of social media activity in determining eligibility for entry into certain jurisdictions introduces a feedback loop between identity and expression. Over time, the effect is not only enforcement, but self-regulation. Individuals begin to moderate their behaviour not in response to direct control, but in anticipation of potential consequences.

13. The Asymmetry of Value

A consistent pattern emerges. Individuals provide data, accept friction, and assume risk, while institutions gain visibility, control, and reduced liability.

The benefits are concentrated. The costs are distributed. This asymmetry explains much of the resistance to identity systems, even when they are presented as beneficial.

14. Irreversibility

Perhaps the most important characteristic of identity infrastructure is that it is difficult to remove once established. Dependencies form; services rely on them; alternatives diminish.

Decisions made in response to immediate concerns become structural constraints. A system can move from weak identity to strong identity, but the reverse is far more difficult.

15. Why This Article Exists

It is worth being explicit about intent.

The original article was not written to advocate a position, nor to argue from ideology. It was written because the current discourse around identity and age verification is often shallow, reactive, or framed in extremes, oscillating between technocratic certainty and conspiracy.

Neither is useful.

The purpose here was to analyse what is actually being built, not what is claimed. To examine the system as it exists, to trace its internal logic, and to understand where that logic leads if left unconstrained.

There is also a more basic motivation. If systems are not analysed, they cannot be understood. If they are not understood, they cannot be shaped. Analysis is the map by which we navigate complex systems, particularly those that emerge incrementally and without central design.

In that sense, this is not a critique of any single policy or decision. It is an attempt to surface the structure that sits beneath them.

Because once that structure becomes visible, the question is no longer whether individual measures are justified, but whether the system they collectively produce is one we intended to build.

16. Conclusion: The Question That Remains

None of this negates the need to address harm online, but it does change the framing. Identity is not a neutral mechanism. It is a foundational layer that shapes how systems behave. The same infrastructure that enables safety also enables surveillance. The same mechanisms that allow accountability also allow control.

The long-term consequence is not only structural but also cognitive. Systems that require identity for participation begin to shape behaviour upstream. Exploration becomes conditional, interaction becomes permissioned, and the space for unstructured discovery narrows. Over time, this changes not only how systems operate, but how individuals think and engage within them.

The question is not whether identity will play a role. It is whether its boundaries are defined clearly enough, early enough, and enforced strongly enough to prevent outcomes that are neither intended nor easily reversed. Because if they are not, the system will still evolve, just not necessarily in the direction we believe we are choosing it to go.

Any viable approach will require strict constraints, including purpose limitation, minimal data retention, independent oversight, and built-in reversibility, before systems become entrenched. Otherwise, the risk is not only that identity systems change what we do, but that, over time, they change what we are willing to think, say, and explore.