When Adoption Becomes The Goal, Risk Becomes Invisible By Design

This article examines how AI risk is obscured when organisations prioritise adoption over governance. Drawing on real-world examples, it argues that AI usage is already endemic, but largely shallow, uncontrolled, and poorly understood. In regulated environments, optimising for uptake before addressing data lifecycle, verification, leakage, and accountability is not innovation, but a dangerous substitution of metrics for responsibility.

Contents

  1. Introduction
  2. Everyone Is Using AI, That Is Not The Same Thing As Using It Well
  3. AI Is Already Endemic, Denial Does Not Change That
  4. Usage Is Not Capability, And Adoption Is Not Competence
  5. GIGO To The Max… And Then Things Get Worse
  6. The Real Problem: Data Lifecycle In Regulated Environments
  7. This Is Bigger Than The Data Layer, And That Is The Point
  8. Conclusion: The Pattern Is The Problem
  9. Coda: We Will Not “Figure This Out”

1. Introduction

This article follows on from a concrete failure of ChatGPT as a platform: a case where data integrity, support, and accountability broke down at the point they mattered most.

The issue examined here is broader and more worrying: the organisational patterns that make such failures not just possible, but inevitable.

When I raised concerns about the use of AI in regulated data platforms recently, the discussion did not centre on data lifecycle, validation, retention, or regulatory exposure.

There was no serious engagement with questions such as:

  • What data is entering these systems?
  • What data is being retained, and for how long?
  • How is output verified before it is acted upon?
  • How do we prevent regulated or sensitive data being exposed via conversational interfaces?

Instead, the dominant question in the room was this:

How do we get more people to use AI?

Adoption metrics. Exemplars. Internal evangelism.

This is not governance. It is growth thinking smuggled into a risk domain.

In regulated environments, the primary question is not how many people are using a system. It is whether the system can be used safely, correctly, and accountably at all.

Optimising for usage before you understand data flow, control boundaries, or failure modes is not innovation. It is negligence with a dashboard.

2. Everyone Is Using AI, That Is Not The Same Thing As Using It Well

One of the most common objections raised whenever AI risk or governance is discussed is the assertion that

“people don’t really know what AI is yet.”

This is usually delivered as if it were a profound insight. It is not.

People have never needed to understand how a technology works internally in order to use it. They have only ever needed it to produce outcomes they care about.

Most people do not understand how a television works. They do not understand signal processing, circuitry, or display technology. They do not need to. The device presents a picture, and that is sufficient.

Treating “people don’t know what AI is” as a blocker is therefore a category error. It assumes conceptual understanding is a precondition for practical adoption, and mistakes introspection for progress.

Once you look at the world through that lens, the current state of AI adoption becomes impossible to ignore.

I was reminded of this recently in a completely mundane setting. I was at the gym, chatting to the guy behind the counter, when he mentioned, almost offhand, that he was writing books using AI. Not drafts. Not experiments. Finished books. Several of them. Published on Amazon Marketplace. Available for download.

It was genuinely interesting, not least because it illustrates the reality we seem determined to ignore: AI is already everywhere, and everyone is already using it.

People are not waiting for permission. They are not waiting for governance frameworks. They are not waiting for internal exemplars.

They are just using the tools.

And they don’t need to understand the internals any more than my Dad needed to understand how a television worked to enjoy watching it.

When I was a kid, a television in our house broke. I took it apart, found dry joints on the board, resoldered them, and put it back together. It worked. The old man was astonished. He didn’t need to know how I’d done it; he couldn’t believe I had. He just wanted the picture back, in a framework he felt comfortable with.

That is how humans interact with technology. They care about outcomes, not mechanisms, and they like to ignore the messy details.

AI is no different.

3. AI Is Already Endemic, Denial Does Not Change That

We have been here before.

In the early days of the internet, a familiar refrain circulated in executive and technical circles alike: “The internet doesn’t really change anything” (attributed to Andy Grove of Intel, c. 2001).

What followed, of course, was the realisation that the internet did not change one thing in particular: it changed everything. It created an entirely new playing field on which existing assumptions about distribution, scale, access, and control no longer held.

AI occupies the same position today.

It is not a feature.
It is not an add-on.
It is a substrate shift.

Buckminster Fuller described this phenomenon as ephemeralisation: doing more and more with less and less, while hiding ever-increasing complexity beneath new abstractions. Alvin Toffler made a related point from another direction: complex systems do not disappear; they are concealed behind simpler interfaces, while we suffer from “information overload”.

AI accelerates this process dramatically. It lowers the apparent cost of action, compresses expertise into interfaces, and obscures the machinery underneath.

That is why debates about whether AI is “really being used yet” miss the point entirely.

AI is already embedded in:

  • office tools,
  • developer environments,
  • search,
  • document systems,
  • communications platforms,
  • and corporate stacks via Microsoft Copilot and its cousins.

So when someone says, “We’re not really using AI yet”, what they usually mean is:

“We are not using AI deliberately”.

That distinction matters.

Because whether or not an organisation believes it has “adopted AI”, AI is already being used, informally, unevenly, and without guardrails.

Which brings us back to the central mistake.

4. Usage Is Not Capability, And Adoption Is Not Competence

There is a world of difference between:

  • more people using AI, and
  • an organisation using AI intelligently.

Most people use AI the way they use a web browser:
they ask questions, reformat text, summarise things, and move on.

That is not wrong, but it is shallow.

My own use of AI tends to look different. I don’t treat it as a vending machine for answers. I use it to build things, reason through real problems, explore system behaviour, and test ideas. There is collaboration there, not just consumption.

Most corporate “AI adoption” programmes do not make this distinction. They collapse everything into one metric: usage.

But usage tells you almost nothing about:

  • quality of outputs,
  • correctness,
  • risk,
  • or impact.

It just tells you that the tool is being touched.

I am aware that these observations are grounded in direct experience rather than survey data. That is deliberate. The patterns described here are not controversial in regulated environments; they are widely recognised, quietly discussed, and rarely surfaced publicly. What is unusual is not the behaviour, but the willingness to name it plainly.

5. GIGO To The Max… And Then Things Get Worse

At a basic level, AI systems still obey a very old rule: garbage in, garbage out.

If the prompt is poor, the output will be poor. That is before we even get to:

  • hallucinations,
  • confident fabrication,
  • subtle errors,
  • or inconsistent reasoning.

Beyond that, serious use introduces much harder problems:

  • How do you triangulate outputs?
  • How do you standardise responses?
  • How do you verify correctness?
  • How do you detect drift?

These are not theoretical concerns. They are operational ones.

And none of them are addressed by simply increasing adoption.
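
To make “triangulate outputs” slightly less abstract, here is a minimal sketch in Python, assuming each model is wrapped as a plain prompt-in, answer-out callable. The agreement measure and the threshold are illustrative assumptions, not a recommendation; real verification needs domain-specific checks, and drift detection needs monitoring over time, which this does not attempt.

```python
from typing import Callable, List

# A model here is just "prompt in, answer out"; how it is hosted is irrelevant
# to the governance point being made.
Model = Callable[[str], str]


def triangulate(prompt: str, models: List[Model], min_agreement: float = 0.8) -> dict:
    """Ask several independent models the same question and compare answers.

    Returns the answers plus a crude agreement score; anything below the
    threshold is flagged for human review rather than acted on automatically.
    """
    answers = [m(prompt) for m in models]

    # Crude lexical agreement: pairwise Jaccard overlap of answer tokens.
    # Real deployments would use domain-specific checks (schemas, reference
    # values, secondary classifiers), not token overlap.
    def jaccard(a: str, b: str) -> float:
        ta, tb = set(a.lower().split()), set(b.lower().split())
        return len(ta & tb) / len(ta | tb) if (ta | tb) else 1.0

    pairs = [(i, j) for i in range(len(answers)) for j in range(i + 1, len(answers))]
    score = min((jaccard(answers[i], answers[j]) for i, j in pairs), default=1.0)

    return {
        "answers": answers,
        "agreement": score,
        "needs_review": score < min_agreement,
    }
```

The point of the sketch is not the arithmetic; it is that disagreement between independent runs is a signal you can act on, and that acting on it requires someone to own the review step.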

6. The Real Problem: Data Lifecycle In Regulated Environments

Once you place AI inside a regulated data environment, three sets of hard questions appear immediately.

6.1 Data Going In

How do you prevent bad, sensitive, or regulated data from entering systems where it does not belong?

What controls exist at the boundary?
Who is responsible for enforcement?
How is misuse detected?
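
As a rough illustration of what a control at that boundary can look like, here is a minimal sketch, assuming all prompts pass through a single chokepoint before reaching any external AI service. The detection patterns, the exception name, and the decision to reject rather than silently redact are assumptions made for the example, not a description of any particular product.

```python
import re

# Illustrative patterns only; a real deployment would use the organisation's
# own data classification rules and a maintained detection service.
BLOCKED_PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "uk_nino": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.IGNORECASE),
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}


class BoundaryViolation(Exception):
    """Raised when a prompt appears to contain regulated or sensitive data."""


def check_outbound(prompt: str) -> str:
    """Reject, rather than silently strip, prompts that match known patterns.

    Rejection forces the incident to surface, which is the point: detection
    and accountability, not quiet remediation.
    """
    hits = [name for name, pattern in BLOCKED_PATTERNS.items() if pattern.search(prompt)]
    if hits:
        raise BoundaryViolation(f"Prompt blocked; matched: {', '.join(hits)}")
    return prompt
```

Even a crude chokepoint like this answers two of the questions above by construction: the control exists at the boundary, and every violation produces an event someone is responsible for handling.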

6.2 Data Coming Out

How do you ensure that AI outputs do not leak regulated data?

How do you prevent inference attacks, reconstruction, or accidental disclosure via conversational interfaces?
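
The outbound direction can be sketched the same way, assuming the organisation keeps a register of values that must never leave the boundary (client identifiers, account numbers, internal code names). The register entries and the function shape below are purely illustrative; a check like this catches only direct disclosure, and inference or reconstruction attacks need stronger measures such as output classifiers, query budgets, and adversarial testing.

```python
# Illustrative register entries; in practice this would be fed from the
# organisation's own data catalogue, not hard-coded.
SENSITIVE_REGISTER = {
    "ACC-00912-X",        # example account identifier
    "Project Nightjar",   # example internal code name
}


def screen_response(response: str) -> str:
    """Refuse to return a response that echoes known sensitive values."""
    leaked = [value for value in SENSITIVE_REGISTER if value.lower() in response.lower()]
    if leaked:
        return "[response withheld: matched restricted register entries]"
    return response
```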

6.3 Autonomous Or Semi-Autonomous Agents

If AI agents are operating on data in real time:

  • how are they supervised?
  • how are their actions constrained?
  • how do you prove they behaved correctly?

These are lifecycle questions. Governance questions. Accountability questions.

They are far more important than whether you have enough exemplars in your internal newsletter.
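
For the agent questions above, here is a minimal sketch of one possible shape: every action an agent can take is mediated by a broker that enforces an allow-list and writes an append-only audit trail. The action names, the broker function, and the log format are assumptions for illustration, not a reference to any particular agent framework.

```python
import json
import time
from typing import Any, Callable, Dict

# The agent cannot call these functions directly; it can only request them
# through the broker. Destructive actions are deliberately absent.
ALLOWED_ACTIONS: Dict[str, Callable[..., Any]] = {
    "read_record": lambda record_id: {"record_id": record_id, "status": "read"},
    "flag_record": lambda record_id, reason: {"record_id": record_id, "flagged": reason},
    # Deliberately not listed: delete_record, export_data, send_email.
}


def broker(action: str, audit_path: str = "agent_audit.jsonl", **kwargs: Any) -> Any:
    """Execute only allow-listed actions, and log every attempt either way.

    The audit trail is what lets you answer "prove the agent behaved
    correctly" after the fact; supervision without evidence is just trust.
    """
    entry = {
        "ts": time.time(),
        "action": action,
        "args": kwargs,
        "allowed": action in ALLOWED_ACTIONS,
    }
    with open(audit_path, "a") as log:
        log.write(json.dumps(entry) + "\n")

    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"Action '{action}' is not permitted for this agent")
    return ALLOWED_ACTIONS[action](**kwargs)
```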

7. This Is Bigger Than The Data Layer, And That Is The Point

Where organisations do take this seriously, they invert the sequence: governance first, controls before rollout, verification before scale. Those cases exist, and they underline that the failures described here are not technical inevitabilities, but organisational decisions.

AI is a pervasive organisational capability problem that touches every layer, and the data layer is just where the failure becomes obvious first.

Focusing exclusively on data in, data out, and autonomous agents is tempting because those risks are concrete, auditable, and familiar. They map neatly onto existing control frameworks. They feel governable.

But they are only the most visible part of a much wider surface area.

AI now permeates organisations horizontally as well as vertically. It appears in ad hoc usage, decision support, analysis, documentation, communication, prediction, and recall, often informally, often invisibly, and almost always faster than governance can respond.

Staff paste internal documents into chat interfaces to summarise, rephrase, or “sense-check” them. Meeting notes are condensed. Emails are rewritten. Policies are clarified. Reports are drafted. Context is inferred. None of this looks like a formal data pipeline, yet all of it can involve regulated, confidential, or personal data.

AI is also increasingly used as a cognitive prosthetic, a second opinion on decisions, risks, or interpretations. At that point it is no longer merely transforming information; it is shaping judgement. That raises uncomfortable questions about accountability, traceability, and explainability when decisions are challenged.

Prediction and forward-looking analysis compound the problem. Forecasts, scenarios, and trend assessments are generated with increasing confidence and decreasing friction. Errors here do not simply persist, they propagate.

Even documentation itself is no longer neutral. AI drafts narratives, summaries, board packs, and management reports. When those narratives are incomplete, biased, or wrong, the question of authorship and responsibility becomes slippery very quickly.

Conversational interfaces add another layer of pressure. Copilots and chatbots sit in front of regulated systems, changing the threat model entirely. Controls designed for structured access are bypassed by natural language. Context bleeds. Boundaries blur.

Finally, AI is quietly becoming part of organisational memory, a substitute for institutional knowledge, a retrieval layer over everything else. But memory without ownership, retention rules, or deletion guarantees is not an asset. It is a liability waiting to surface.

This is why treating AI governance as a narrow data problem understates the risk.

AI does not just touch your data.
It touches your decisions, your narratives, your memory, and your accountability structures.

And in regulated environments, those are precisely the places where failures tend to accumulate silently, until they do not.

8. Conclusion: The Pattern Is The Problem

What worries me is not that organisations are experimenting with AI.

What worries me is that many of them are optimising for visibility and momentum before they understand risk and control.

They are measuring the wrong thing first.

And in regulated industries, that is not enthusiasm, it is a failure of duty.

AI is impressive.
AI is powerful.
AI is already everywhere.

But without a serious understanding of data lifecycle, control boundaries, and accountability, increasing adoption does not make you more advanced.

It just makes your exposure larger.

Act accordingly.

9. Coda: We Will Not “Figure This Out”

The comfort of thinking this is still early

It is tempting to believe that we are still at an early stage, that what we are seeing now represents the main shape of the problem, and that with enough frameworks, controls, or maturity, we will eventually “figure AI out”.

That belief is comforting. It is also unlikely to be true.

The visible failures are not the whole system

What is visible today (data leakage, weak controls, shallow adoption, governance gaps) is the part of the problem we can already see. It is the tip of the iceberg. The deeper effects are harder to observe precisely because they are diffused across everyday behaviour, informal use, and unexamined reliance.

Ubiquity without deliberate adoption

AI is not being adopted through a single, deliberate decision. It is diffusing through convenience. Through defaults. Through embedded tooling. Through people simply trying to get their work done. No one decided that it should be everywhere. It simply became so.

A moving substrate, not a stable system

In that sense, AI is no longer a system you deploy. It is a substrate you build on, and that substrate is still moving. The complexity has not gone away; it has been concealed behind increasingly fluent interfaces. Capability has been compressed. Friction has been removed. And with that removal of friction comes a loss of visibility into how decisions are formed, justified, and defended.

The normalisation of the unexamined

This is made more dangerous by normalisation. Once behaviour becomes routine, it stops attracting scrutiny. AI use is already becoming boring. It is already backgrounded. And boring things are rarely governed well.

There will be no moment of closure

There will be no moment when this stabilises into something we can neatly document and sign off. No final framework. No completed rollout. No point at which we declare the problem solved.

What there will be instead is an ongoing negotiation with a capability that is everywhere, unevenly understood, and increasingly taken for granted.

The real question

The question, then, is not whether organisations will use AI.

They already are.

The question is whether they are prepared to accept responsibility for what that use quietly changes, in how decisions are made, how knowledge is stored, how narratives are constructed, and how accountability is ultimately enforced.

Because by the time those effects are obvious, it will be far too late to pretend we were still at the beginning.