Tag Archives: AI Gaslighting

When Adoption Becomes The Goal, Risk Becomes Invisible By Design

This article examines how AI risk is obscured when organisations prioritise adoption over governance. Drawing on real-world examples, it argues that widespread AI usage is already endemic, yet largely shallow, uncontrolled, and poorly understood. In regulated environments, optimising for uptake before addressing data lifecycle, verification, leakage, and accountability is not innovation, but a dangerous substitution of metrics for responsibility.


The Risks of Self-Hosting DeepSeek: Ethical Controls, Criminal Facilitation, and Manipulative Potential

Self-hosting advanced AI models like DeepSeek grants unparalleled control but poses severe risks if ethical constraints are removed. With relatively simple modifications, users can disable safeguards, enabling the AI to assist in cybercrime, fraud, terrorism, and psychological manipulation. Such models could automate hacking, facilitate gaslighting, and fuel disinformation campaigns. The open-source AI community must balance innovation with security, and policymakers must consider regulations to curb AI misuse in self-hosted environments before it becomes an uncontrollable threat.
