The Rise of AI–Cyber Policy Convergence: Who’s Leading the Discussion?

AI and cybersecurity are no longer separate conversations. In the UK, they’re becoming one strategic priority, with new leaders, risks, and regulatory battles emerging fast. Until recently, AI and cybersecurity lived in different corners of policy and funding. But that era is over. From deepfake fraud and LLM jailbreaks to AI-assisted vulnerability discovery, the UK now faces a landscape where cyber threats and AI systems are not just overlapping; they are entangled. And the convergence is reshaping national security strategies, tech standards, and regulatory structures. This article explores the organisations, thinkers, and working groups shaping the AI–cyber policy crossover in the UK, and how startups, researchers, and advisors can influence what comes next.

Contents

1. Why AI and Cyber Are Colliding

AI systems are:

  • Attack vectors (e.g. model manipulation, prompt injection)
  • Threat enablers (e.g. AI-written phishing, autonomous exploitation)
  • Security tools (e.g. anomaly detection, log analysis, threat prediction)
  • Resilience challenges (e.g. opaque systems, cascading failure, false confidence)

As a result, policymakers are being forced to build unified approaches to secure, regulate, and govern AI systems as part of national cyber strategy, not separately from it.

2. The Core UK Policy Makers & Shapers

DSIT (Department for Science, Innovation and Technology)

  • Leads the UK’s National AI Strategy and AI Safety Summit coordination
  • Also runs Cyber Local, CyberASAP, and Digital Security by Design
  • Driving convergence by funding projects at the AI–cyber frontier (e.g. AI Red Teaming, risk classification)

Key programmes to watch:

  • Frontier AI Taskforce
  • UKRI BridgeAI and Trustworthy Autonomous Systems (TAS) Hub
  • Secure AI framework discussions via consultations

NCSC (National Cyber Security Centre)

  • Publishing AI guidance tied to supply chain risk, tool assurance, and safe deployment
  • Works with DSIT and global partners to define AI-related security principles
  • Increasing focus on LLM safety, AI-as-threat-surface, and AI-enhanced defence

Centre for Data Ethics and Innovation (CDEI)

  • Leading UK work on responsible AI assurance and auditing
  • Working with NCSC, BSI, and UKRI on frameworks for trustworthy model development
  • Advisory input into regulation, especially on explainability, robustness, and bias

AI Safety Institute UK

  • New institution emerging from the AI Safety Summit
  • Focused on empirical testing, evaluation frameworks, and frontier model risk
  • Influences both domestic policy and international safety coordination with partners like the US and Singapore

3. Think Tanks and Ecosystem Voices

  • RUSI – Driving security- and geopolitics-informed AI regulation
  • Ada Lovelace Institute – Influencing rights-based AI and societal impact regulation
  • Centre for Long-Term Resilience (CLTR) – Mapping systemic risks at AI–cyber intersections
  • Alan Turing Institute – Leading academic research on trustworthy AI, formal methods, and AI safety

4. Industry and Standards Bodies Leading the Merge

  • BCS, IET, and CIISec – Integrating AI into skills frameworks and chartership
  • TechUK – Running joint working groups on AI safety, cyber assurance, and model governance
  • BSI (British Standards Institution) – Leading UK contributions to ISO/IEC 42001 (AI Management Systems) and AI risk vocabularies
  • OpenSSF, OWASP AI Exchange – Cross-pollinating secure software and AI-specific threats

5. Real-World Impact: Where Policy is Changing Practice

  • Cyber Runway and CyberASAP cohorts now include AI-centric tools and threats
  • Public procurement is starting to ask for safe and auditable AI (esp. in defence, healthcare, and education)
  • DSIT and Innovate UK grants increasingly focus on explainable AI, adversarial robustness, and secure-by-design principles
  • AI + cyber convergence is shaping incident response plans, compliance toolchains, and startup assurance expectations

6. How to Get Involved or Influence the Agenda

  • Join DSIT or TechUK working groups focused on AI assurance and AI–cyber safety
  • Contribute to consultations (e.g. AI Code of Practice, UK AI Regulation Framework)
  • Engage with TAS Hub, CyberASAP, or CDEI pilot projects
  • Speak at cross-sector events (CyberUK, CogX, AI Fringe, or RUSI briefings)
  • Offer lived insight, especially if you build or use secure AI systems at SME scale

Final Thoughts

Cybersecurity and AI are now one battlefield. And the UK is moving quickly to define the rules.

If you’re working in secure AI, cyber risk, or digital assurance, don’t wait for convergence to be complete.

Step in now, shape the frameworks, and help write the future.

Because the next wave of influence in UK tech won’t be cyber or AI, it will be both.

References

  1. Inside the UK Cyber Ecosystem: A Strategic Guide in 26 Parts
  2. The Insider’s Guide to Influencing Senior Tech and Cybersecurity Leaders in the UK
  3. The Rise of AI–Cyber Policy Convergence: Who’s Leading the Discussion?
  4. Resilience by Design: How UK Think Tanks and Standards Bodies Shape Security-by-Default
  5. Cyber Across US Government: Agencies, Frameworks, and Innovation Pathways