When Everyone’s an Expert: What AI Can Learn from the Personal Trainer Industry

As AI adoption accelerates, expertise is increasingly “performed” rather than earned. By comparing AI’s current hype cycle with the long-standing lack of regulation in the personal trainer industry, this piece examines how unregulated expertise markets reward confidence over competence, normalise harm, and erode trust. The issue isn’t regulation for its own sake; it’s accountability before failure becomes infrastructure.

Contents

1. Introduction
2. The Personal Trainer Problem
3. AI Is Following the Same Path, But Faster
4. What Lack of Regulation Actually Produces
5. The Critical Difference: AI Is Infrastructure
6. Regulation Isn’t the Point; Accountability Is
7. Closing Thought

1. Introduction

There’s a moment in every unregulated profession where confidence becomes indistinguishable from competence.

Personal training hit that moment years ago. AI is hitting it now: harder, faster, and with far higher stakes.

When anyone can call themselves an expert, markets don’t self-correct. They reward whoever sounds most certain, not whoever understands the risks.

2. The Personal Trainer Problem

Personal training has, for decades, suffered from a fundamental regulatory weakness: anyone can call themselves a personal trainer (PT). Certification standards vary wildly. Oversight is fragmented or cosmetic. Enforcement is minimal. Outcomes, meanwhile, are real and bodily.

To be clear, this does not mean the personal training industry is entirely unstructured. In the UK, for example, bodies such as CIMSPA and REPs provide voluntary standards, insurance requirements impose a baseline of qualification, and many gyms enforce minimum credentials for employment. The issue is not the absence of effort, but the absence of a binding, shared definition of competence that meaningfully constrains who can claim expertise.

The result is predictable:

  • A small number of highly competent professionals
  • A long tail of underqualified, overconfident practitioners
  • Clients harmed through bad advice, unsafe programmes, or false claims
  • Reputation damage that affects the entire profession, not just the worst actors

What’s important is not that regulation is absent, but what fills the vacuum instead: marketing, confidence, aesthetics, and anecdote. The people who sound convincing win. The people who are careful, nuanced, or honest about uncertainty often lose.

3. AI Is Following the Same Path, But Faster

AI is now in its “anyone with a weekend and a LinkedIn account” phase.

We are surrounded by:

  • “AI strategists” with no grounding in systems, data, or risk
  • “40 years of AI experience” claims that collapse under even mild scrutiny
  • Tool-driven expertise mistaken for domain understanding
  • Confident generalisation where precision and context are essential

The parallels with PTs are striking, but the blast radius is much larger. Bad fitness advice might injure a knee or a back. Bad AI advice can:

  • Embed bias into decision-making systems
  • Expose organisations to regulatory, legal, and security risks
  • Automate failure at scale
  • Undermine trust in entire sectors

And just like PT clients, AI adopters are often non-experts themselves. They are buying confidence, not competence.

To be fair, AI is not standing still. Serious organisations are already investing in peer review, benchmarking, red-teaming, assurance frameworks, and formal evaluation. The problem is that these practices coexist with a far larger, louder market that remains unconstrained, and it is that market which most non-expert adopters encounter first.

I was recently told, with complete seriousness, that someone had “about forty years of experience in AI”.

This is not a throwaway exaggeration. It’s a signal. When a field lacks shared definitions of competence, chutzpah survives unchallenged: even absurd claims can pass, provided they’re delivered with enough confidence. It’s bad enough that I have to put up with this in Cyber, never mind in “emerging” fields as well.

4. What Lack of Regulation Actually Produces

This isn’t an argument for blanket regulation, licensing, or credential theatre. It’s an argument that expertise without accountability is not expertise at all: it’s branding.

The real problem isn’t merely the absence of rules; it’s the outcomes that unregulated expertise markets reliably produce:

  1. Confidence beats competence
    Without standards, confidence becomes the primary signal. This selects for charisma, not capability.
  2. Harm becomes invisible until it accumulates
    Individual failures look like edge cases. Systemic harm only becomes obvious after widespread adoption.
  3. Serious practitioners pay the price
    The most competent actors spend time undoing damage caused by others, while also being lumped in with them.
  4. Trust collapses unevenly
    Users don’t lose trust in “bad actors”; they lose trust in the field itself.

We’ve seen this movie before, in industries where injuries, failures, and reputational damage only became visible after long periods of normalised bad practice. The PT industry is still trying to recover the credibility it never formally protected.

5. The Critical Difference: AI Is Infrastructure

Here’s where the analogy stops being merely illustrative and becomes urgent.

AI is not a discretionary lifestyle service. It is rapidly becoming decision infrastructure, embedded in hiring, healthcare, finance, security, and governance. When expertise in such a domain is unbounded and unverified, the consequences are not personal mistakes; they are systemic failures.

And yet, the market signals remain the same: branding over rigour, claims over evidence, tools over understanding.

6. Regulation Isn’t the Point; Accountability Is

The lesson from personal training isn’t “regulate everything”. It’s this: fields that affect human outcomes need mechanisms that distinguish real competence from performative expertise.

That can include:

  • Clear competency frameworks
  • Traceable accountability for outcomes
  • Separation between tool use and professional judgement
  • Cultural norms that reward saying “I don’t know”

Absent these, the market will continue to reward whoever shouts loudest.

7. Closing Thought

When everyone is an expert, expertise becomes meaningless. When expertise becomes meaningless in a field like AI, risk doesn’t disappear: it becomes normalised.

We never properly solved this problem in personal training. People still get hurt.

The difference is that AI doesn’t just affect bodies; it shapes systems. And systems scale mistakes, remember them, and normalise them long after the original “expert” has moved on.