Tag Archives: Weaponised AI

The Ides of March: Reflections on Cyber, Startups, and Scaling Innovation

The Ides of March is a fitting time to reflect on betrayal, resilience, and the realities of UK cybersecurity. In the past two weeks, I’ve balanced DSIT’s Cyber Local funding process, chaired the West Midlands Cyber Working Group (WM CWG), led two funding bids, scaled one startup in a brutal funding climate, and booted up a second from scratch. Along the way, I’ve won the Pitch Battle at Cyber Runway Live, launched the UK’s first dedicated universal cyber risk score and comparison site, and tackled everything from weaponised AI threats to Kafka-powered scalability, all while navigating the messy, unpredictable, and often painful journey of building something that lasts.

Continue reading

The Risks of Self-Hosting DeepSeek: Ethical Controls, Criminal Facilitation, and Manipulative Potential

Self-hosting advanced AI models like DeepSeek grants unparalleled control but poses severe risks if ethical constraints are removed. With relatively simple modifications, users can disable safeguards, enabling the AI to assist in cybercrime, fraud, terrorism, and psychological manipulation. Such models could automate hacking, facilitate gaslighting, and fuel disinformation campaigns. The open-source AI community must balance innovation with security, and policymakers must consider regulations to curb AI misuse in self-hosted environments before it becomes an uncontrollable threat.

Continue reading