The United States is arguably the most influential force in global cybersecurity, but its governance model is sprawling, federal, and often opaque to outsiders. Responsibility is distributed across military, civilian, and intelligence agencies—each with their own authorities, funding mechanisms, and strategic priorities.
Continue reading

Tag Archives: AI Safety

Cyber Across European Governments: Key Bodies, Funding, and Coordination
The European cybersecurity landscape is layered, fragmented, and fast-evolving. Unlike the centralised approaches of some governments, the EU’s model of collective sovereignty means cybersecurity is coordinated, rather than controlled, by Brussels. National governments still manage their own defence and digital sovereignty, but major funding, regulation, and cross-border frameworks increasingly come from the EU level.
Continue reading
Cyber Across UK Government: Departments, Programmes, and Policy Players
The definitive guide to who shapes cyber policy in Whitehall, and how to work with them.
Continue reading
Inside the UK Cyber Ecosystem: A Strategic Guide in 26 Parts
An extensive guide mapping the networks, policy engines, commercial power bases, and future-shapers of British cybersecurity.
Continue reading
The Insider’s Guide to Influencing Senior Tech and Cybersecurity Leaders in the UK
Influencing senior leaders in cybersecurity and technology is no small task, especially in the UK, where credibility, networks, and standards carry immense weight. Whether you’re a startup founder, a scale-up CISO, or a policy influencer, knowing where the key conversations happen (and who shapes them) can make the difference between being heard and being ignored.
Continue reading
The Risks of Self-Hosting DeepSeek: Ethical Controls, Criminal Facilitation, and Manipulative Potential
Self-hosting advanced AI models like DeepSeek grants unparalleled control but poses severe risks if ethical constraints are removed. With relatively simple modifications, users can disable safeguards, enabling the AI to assist in cybercrime, fraud, terrorism, and psychological manipulation. Such models could automate hacking, facilitate gaslighting, and fuel disinformation campaigns. The open-source AI community must balance innovation with security, and policymakers must consider regulations to curb AI misuse in self-hosted environments before it becomes an uncontrollable threat.
Continue reading