Category Archives: article

Power Without Alibis: What Remains After You Understand How Cruelty Works

This final essay completes a trilogy on power by asking what remains once its mechanics are fully understood. Building on Pfeffer’s organisational realism and Machiavelli’s historical clarity, it argues that unsanitised descriptions of power do not endorse cruelty but remove the moral alibis that allow harm to persist. By collapsing the distance between action and consequence, such writing makes innocence unavailable and neutrality impossible. The central risk, that truth can be weaponised, is acknowledged, but silence is shown to be more partisan, concentrating power through ignorance rather than constraining it.

Continue reading

From Partitioning to Liquid Clustering: Evolving SCD2 Bronze on Databricks at Scale

As SCD2 Bronze layers mature, even well-designed partitioning and ZORDER strategies can struggle under extreme scale, high-cardinality business keys, and evolving access patterns. This article examines why SCD2 Bronze datasets place unique pressure on static data layouts and introduces Databricks Liquid Clustering as a natural next step in their operational evolution. It explains when Liquid Clustering becomes appropriate, how it fits within regulated Financial Services environments, and how it preserves auditability while improving long-term performance and readiness for analytics and AI workloads.
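
As a flavour of what the shift looks like in practice, here is a minimal sketch of defining and re-clustering a Delta table with Liquid Clustering. It assumes a Databricks environment where `spark` is available, and the table and column names are illustrative rather than taken from the article.

```python
# Minimal sketch: Liquid Clustering on an illustrative SCD2 Bronze table.
# Assumes a Databricks runtime where `spark` is provided; names are hypothetical.

# New tables: cluster by the high-cardinality business key plus the SCD2 validity
# column instead of relying on static partitioning and ZORDER.
spark.sql("""
    CREATE TABLE IF NOT EXISTS bronze.customer_scd2 (
        customer_bk STRING,
        attributes  MAP<STRING, STRING>,
        valid_from  TIMESTAMP,
        valid_to    TIMESTAMP,
        is_current  BOOLEAN
    )
    USING DELTA
    CLUSTER BY (customer_bk, valid_from)
""")

# Existing unpartitioned Delta tables can have clustering keys (re)defined in place;
# statically partitioned tables are typically rewritten into a new clustered table.
spark.sql("ALTER TABLE bronze.customer_scd2 CLUSTER BY (customer_bk, valid_from)")

# Incremental OPTIMIZE runs then re-cluster data in the background over time.
spark.sql("OPTIMIZE bronze.customer_scd2")
```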

Continue reading

From The Prince to the Boardroom: Power in Machiavelli and Jeffrey Pfeffer

Comparing Jeffrey Pfeffer with Niccolò Machiavelli reveals a shared realism about power stripped of moral comfort. Both describe influence as driven by perception, control, and strategic action rather than virtue, and both expose why idealism so often fails in practice. Yet they diverge in intent: Machiavelli accepts harm as the price of order, while Pfeffer ultimately confronts the human and organisational costs of power-driven systems. Together, they show that the mechanics of power are historically stable, even as modern leaders are increasingly forced to reckon with their consequences.

Continue reading

From Graph Insight to Action: Decisions, Controls & Remediation in Financial Services Platforms

This article argues that financial services platforms fail not from lack of insight, but from weak architecture between detection and action. Graph analytics and models generate signals, not decisions. Collapsing the two undermines accountability, auditability, and regulatory defensibility. By separating signals, judgements, and decisions; treating decisions as time-qualified data; governing controls as executable policy; and enabling deterministic replay for remediation, platforms can move from reactive analytics to explainable, defensible action. In regulated environments, what matters is not what was known, but what was decided, when, and why.
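
As an illustration of what "treating decisions as time-qualified data" can mean, here is a minimal sketch of a decision record linked back to its originating signal. The record shape and field names are hypothetical, not the article's schema.

```python
# Minimal sketch: a decision captured as time-qualified data, kept separate from the
# signal that prompted it. Field names and structure are illustrative only.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class Signal:
    signal_id: str            # output of a graph analytic or model run
    model_version: str        # which model or ruleset produced it
    produced_at: datetime     # when the signal was generated

@dataclass(frozen=True)
class Decision:
    decision_id: str
    signal_id: str            # traceable link back to the originating signal
    outcome: str              # e.g. "escalate", "no_action", "file_sar"
    decided_by: str           # accountable human or policy identifier
    policy_version: str       # the executable control policy in force
    decided_at: datetime      # when the decision was taken
    effective_from: datetime  # when it takes effect (may differ from decided_at)

# Deterministic replay for remediation only needs the decision records and the policy
# versions they reference; signals can be regenerated from the same inputs.
decision = Decision(
    decision_id="D-001",
    signal_id="S-042",
    outcome="escalate",
    decided_by="analyst:1234",
    policy_version="aml-controls-v7",
    decided_at=datetime(2025, 3, 1, 9, 30, tzinfo=timezone.utc),
    effective_from=datetime(2025, 3, 1, 9, 30, tzinfo=timezone.utc),
)
```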

Continue reading

Jeffrey Pfeffer’s Rules of Power: Truth, Use, and Consequence

Jeffrey Pfeffer’s work strips away comforting myths about merit and leadership to expose how power actually operates inside organisations. Drawing on decades of research, he shows that influence is accumulated through perception, alliances, and control of resources rather than competence alone. While his “rules of power” are descriptively accurate, they are ethically neutral and often corrosive. Pfeffer’s later work confronts the human cost of these systems, forcing leaders to choose between naïve idealism and cynical effectiveness—and to decide whether power will be used merely to win or to change the conditions under which winning occurs.

Continue reading

Networks, Relationships & Financial Crime Graphs on the Bronze Layer

Financial crime rarely appears in isolated records; it emerges through networks of entities, relationships, and behaviours over time. This article explains why financial crime graphs must be treated as foundational, temporal structures anchored near the Bronze layer of a regulated data platform. It explores how relationships are inferred, versioned, and governed, why “known then” versus “known now” matters, and how poorly designed graphs undermine regulatory defensibility. Done correctly, crime graphs provide explainable, rebuildable network intelligence that stands up to scrutiny years later.
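
To make the "known then" versus "known now" point concrete, here is a minimal sketch of a versioned, bitemporal graph edge and a filter that returns only what the platform knew at a given instant. The structure and names are illustrative assumptions.

```python
# Minimal sketch: a versioned, bitemporal edge in a financial crime graph.
# Every inferred relationship carries both business validity and the time at which
# it became known to the platform. Names are illustrative only.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass(frozen=True)
class Edge:
    src_entity: str               # resolved entity identifier
    dst_entity: str
    relationship: str             # e.g. "shares_device", "sends_funds_to"
    inferred_by: str              # rule or model version that created the edge
    valid_from: datetime          # when the relationship held in the real world
    valid_to: Optional[datetime]
    known_from: datetime          # when the platform first recorded it
    known_to: Optional[datetime]  # when it was superseded or retracted, if ever

def edges_as_known_on(edges: list[Edge], as_of: datetime) -> list[Edge]:
    """Return the edges the platform knew about at `as_of` ("known then"),
    not the network as it is understood today ("known now")."""
    return [
        e for e in edges
        if e.known_from <= as_of and (e.known_to is None or as_of < e.known_to)
    ]
```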

Continue reading

The Intersection of the Message: Cicero, Machiavelli, and Ivy Lee

This article explores how Cicero, Machiavelli, and Ivy Lee each used “the message” as a vehicle for power, persuasion, and public control. From Cicero’s moralised rhetoric to Machiavelli’s cunning optics to Ivy Lee’s media-savvy framing, it dissects how messaging intersects with ethics, audience, and intent. Their techniques still shape political campaigns, corporate comms, and crisis response today—proving that the message is never just about words, but always about influence.

Continue reading

Probabilistic & Graph-Based Identity in Regulated Financial Services

This article argues that probabilistic and graph-based identity techniques are unavoidable in regulated Financial Services, but only defensible when tightly governed. Deterministic entity resolution remains the foundation, providing anchors, constraints, and auditability. Probabilistic scores and identity graphs introduce likelihood and network reasoning, not truth, and must be time-bound, versioned, and replayable. When anchored to immutable history, SCD2 discipline, and clear guardrails, these techniques enhance fraud and AML insight; without discipline, they create significant regulatory risk.
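
A minimal sketch of one way this discipline can look in code: probabilistic links carry a versioned, time-bound score and are only accepted when they clear an explicit threshold and do not contradict a deterministic anchor. All names and thresholds are illustrative.

```python
# Minimal sketch: probabilistic identity links constrained by deterministic anchors.
# Thresholds, field names, and the notion of "conflict" are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class CandidateLink:
    record_a: str
    record_b: str
    score: float          # probabilistic likelihood, not a claim of truth
    model_version: str    # which matcher produced the score (replayable)
    scored_at: datetime   # scores are time-bound, never free-floating

def accept_link(link: CandidateLink,
                conflicts_with_anchor: bool,
                threshold: float = 0.95) -> bool:
    """Accept a probabilistic link only if it clears an explicit threshold and does
    not contradict a deterministic anchor (e.g. a verified national identifier)."""
    if conflicts_with_anchor:
        return False      # hard, auditable constraints always win
    return link.score >= threshold
```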

Continue reading

Merry Christmas and Happy New Year 2026 from the West Midlands Cyber Hub

As the new year begins, the West Midlands Cyber Hub is delivering an ambitious programme of practical, community-driven cyber events from January to March… with more already in development. This programme is focused on building cyber capability, confidence, and collaboration across the West Midlands, supporting organisations, practitioners, and the wider regional economy.

Continue reading

WTF is the Fellegi–Sunter Model? A Practical Guide to Record Matching in an Uncertain World

The Fellegi–Sunter model is the foundational probabilistic framework for record linkage… deciding whether two imperfect records refer to the same real-world entity. Rather than enforcing brittle matching rules, it treats linkage as a problem of weighing evidence under uncertainty. By modelling how fields behave for true matches versus non-matches, it produces interpretable scores and explicit decision thresholds. Despite decades of new tooling and machine learning, most modern matching systems still rest on this logic… often without acknowledging it.
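
The core of the model fits in a few lines: each field contributes log2(m/u) on agreement and log2((1 - m)/(1 - u)) on disagreement, and the summed weight is compared against explicit thresholds. The m/u probabilities and thresholds below are illustrative, not estimated from data.

```python
# Minimal worked sketch of Fellegi–Sunter match weights.
#   m = P(field agrees | records are a true match)
#   u = P(field agrees | records are not a match)
# The m/u values and the decision thresholds below are illustrative, not estimated.
import math

M_U = {                       # per-field (m, u) probabilities
    "surname":       (0.95, 0.01),
    "date_of_birth": (0.97, 0.005),
    "postcode":      (0.90, 0.02),
}

UPPER, LOWER = 10.0, 0.0      # thresholds: link / clerical review / non-link

def match_weight(agreements: dict[str, bool]) -> float:
    """Sum per-field log2 likelihood ratios: agreement adds log2(m/u),
    disagreement adds log2((1 - m) / (1 - u))."""
    total = 0.0
    for field, agrees in agreements.items():
        m, u = M_U[field]
        total += math.log2(m / u) if agrees else math.log2((1 - m) / (1 - u))
    return total

def decide(weight: float) -> str:
    if weight >= UPPER:
        return "link"
    if weight <= LOWER:
        return "non-link"
    return "clerical review"

w = match_weight({"surname": True, "date_of_birth": True, "postcode": False})
print(round(w, 2), decide(w))   # two strong agreements outweigh one disagreement
```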

Continue reading

The Ivy Lee Method: Public Relations, Productivity, and Propaganda

Ivy Lee shaped the modern world of public relations by mastering narrative control, pioneering a simple yet powerful productivity method, and reframing truth as both tactic and ethic. This article explores Lee’s legacy, from press release orchestration and the “two-way street” philosophy to his morally ambiguous work for industrial giants, ending with a practical guide to applying his enduring techniques today.

Continue reading

Aligning the Data Platform to Enterprise Data & AI Strategy

This article establishes the data platform as the execution engine of Enterprise Data & AI Strategy in Financial Services. It bridges executive strategy and technical delivery by showing how layered architecture (Bronze, Silver, Gold, Platinum), embedded governance, dual promotion lifecycles (North/South and East/West), and domain-aligned operating models turn the strategic pillars (architecture & quality; governance, security & privacy; process & tools; and people & culture) into repeatable, regulator-ready outcomes. The result is a platform that delivers control, velocity, semantic alignment, and safe AI enablement at scale.

Continue reading

The Zen Story of the Cup of Tea: How Awareness Opens Space

A Zen master overfills a visitor’s teacup to show that a mind already full has no space to receive. The problem is not knowing too much, but being unable to listen. Emptying the cup is not a single insight but an ongoing practice: noticing when we’ve stopped listening and allowing a small pause to create space. Have a cup of tea. Have another one.

Continue reading

Migrating Legacy EDW Slowly-Changing Dimensions to Lakehouse Bronze

From 20-year-old warehouse SCDs to a modern temporal backbone you can trust. This article lays out a practical, regulator-aware playbook for migrating legacy EDW SCD dimensions to a modern SCD2 Bronze layer in a medallion/lakehouse architecture. It covers what you are really migrating (semantics, not just tables), how to treat the EDW as a source system, how to build canonical SCD2 Bronze, how to run both platforms in parallel, and how to prove to auditors and regulators that nothing has been lost or corrupted in the process.
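
As a small taste of the "prove nothing has been lost" step, here is a minimal reconciliation sketch for the parallel-running phase, comparing version counts and content hashes per business key between the legacy EDW extract and the rebuilt Bronze table. It assumes a Spark session `spark`, and all table, key, and column names are illustrative.

```python
# Minimal sketch: parallel-run reconciliation between the legacy EDW SCD extract and
# the rebuilt SCD2 Bronze table. Assumes a Spark session `spark`; names are illustrative.
from pyspark.sql import functions as F

legacy = spark.table("edw_extract.customer_dim_scd2")   # EDW treated as a source system
bronze = spark.table("bronze.customer_scd2")            # canonical SCD2 Bronze rebuild

KEY = ["customer_bk"]
ATTRS = ["name", "address", "valid_from", "valid_to"]

def fingerprint(df):
    """One row per business key: number of SCD2 versions plus an order-independent
    sum of per-row hashes over the compared attributes."""
    return (
        df.withColumn("row_hash", F.xxhash64(*[F.col(c) for c in ATTRS]))
          .groupBy(*KEY)
          .agg(F.count(F.lit(1)).alias("version_count"),
               F.sum("row_hash").alias("content_sum"))
    )

# Business keys where the two platforms disagree on version count or content,
# or that exist on only one side: the evidence auditors ask to see.
mismatches = (
    fingerprint(legacy).alias("l")
    .join(fingerprint(bronze).alias("b"), on=KEY, how="full_outer")
    .where(
        (F.col("l.version_count") != F.col("b.version_count"))
        | (F.col("l.content_sum") != F.col("b.content_sum"))
        | F.col("l.version_count").isNull()
        | F.col("b.version_count").isNull()
    )
)
mismatches.show()
```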

Continue reading

I Have a Full Set of Every Appearance of Flaming Carrot and I’m Not Afraid to Use It

I own every appearance of Flaming Carrot, not as memorabilia but as a working instrument. This is a short essay about absurdity used with discipline: carrots against false seriousness, mockery as a tool, and what happens when you refuse to let power keep its costume. Flaming Carrot isn’t just a forgotten indie gem; he’s symbolic weaponry. Pataphysics writ large.

Continue reading

Enterprise Point-in-Time (PIT) Reconstruction: The Regulatory Playbook

This article sets out the definitive regulatory playbook for enterprise Point-in-Time (PIT) reconstruction in UK Financial Services. It explains why PIT is now a supervisory expectation, driven by PRA/FCA reviews, Consumer Duty, s166 investigations, AML/KYC forensics, and model risk, and it makes a clear distinction between “state as known” and “state as now known”. Covering SCD2 foundations, entity resolution, precedence versioning, multi-domain alignment, temporal repair, and reproducible rebuild patterns, it shows how to construct a deterministic, explainable PIT engine that can withstand audit, replay history reliably, and defend regulatory outcomes with confidence.
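
To make the distinction concrete, here is a minimal sketch of the two query shapes: "state as known" filters on both business validity and knowledge time, while "state as now known" keeps the same validity window but takes the latest corrected view. It assumes a Spark session `spark`; the table and the knowledge-time columns are illustrative.

```python
# Minimal sketch: the two PIT query shapes. Assumes a Spark session `spark`; the table
# and the knowledge-time columns (record_loaded_at, superseded_at) are illustrative.

AS_OF = "2023-06-30 23:59:59"

# "State as known on date X": business validity AND what the platform knew at the time.
state_as_known = spark.sql(f"""
    SELECT customer_bk, attributes, valid_from, valid_to
    FROM bronze.customer_scd2
    WHERE valid_from <= TIMESTAMP '{AS_OF}'
      AND (valid_to IS NULL OR valid_to > TIMESTAMP '{AS_OF}')          -- business validity
      AND record_loaded_at <= TIMESTAMP '{AS_OF}'                       -- knowledge time
      AND (superseded_at IS NULL OR superseded_at > TIMESTAMP '{AS_OF}')
""")

# "State as now known": the same validity window, but the latest corrected view of it.
state_as_now_known = spark.sql(f"""
    SELECT customer_bk, attributes, valid_from, valid_to
    FROM bronze.customer_scd2
    WHERE valid_from <= TIMESTAMP '{AS_OF}'
      AND (valid_to IS NULL OR valid_to > TIMESTAMP '{AS_OF}')
      AND superseded_at IS NULL              -- current knowledge, including later repairs
""")
```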

Continue reading

The Work Speaks for Itself

This article explains why I am stepping back from writing about neurodiversity as a primary lens for my work. Not because the subject no longer matters, but because over time it has begun to obscure achievement rather than illuminate it. This is a reflection on explanation, authority, and the point at which context stops being helpful and starts getting in the way.

Continue reading

Building Regulator-Defensible Enterprise RAG Systems (FCA/PRA/SMCR)

This article defines what regulator-defensible enterprise Retrieval-Augmented Generation (RAG) looks like in Financial Services (at least in 2025–2026). Rather than focusing on model quality, it frames RAG through the questions regulators actually ask: what information was used, can the answer be reproduced, who is accountable, and how risk is controlled. It sets out minimum standards for context provenance, audit-grade logging, temporal and precedence-aware retrieval, human-in-the-loop escalation, and replayability. The result is a clear distinction between RAG prototypes and enterprise systems that can survive PRA/FCA and SMCR scrutiny.
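
As an illustration of what audit-grade logging and replayability imply at the record level, here is a minimal sketch of a per-answer audit entry. The field names are illustrative, not a prescribed schema.

```python
# Minimal sketch of an audit-grade log record for one RAG answer: enough to state what
# information was used, by which components, and to replay the answer later.
# Field names are illustrative, not a prescribed schema.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class RagAuditRecord:
    request_id: str
    asked_at: str                                   # ISO-8601 timestamp of the query
    as_of: str                                      # temporal context of the retrieval
    query_text: str
    retrieved_chunks: list[tuple[str, str, str]]    # (document_id, version, chunk_id)
    retrieval_index_version: str
    prompt_template_version: str
    model_name: str
    model_version: str
    answer_text: str
    escalated_to_human: bool                        # human-in-the-loop outcome, if any

record = RagAuditRecord(
    request_id="R-0001",
    asked_at=datetime.now(timezone.utc).isoformat(),
    as_of="2024-12-31T23:59:59Z",
    query_text="What disclosures applied to this product at year end?",
    retrieved_chunks=[("DOC-17", "v3", "c12"), ("DOC-22", "v1", "c04")],
    retrieval_index_version="idx-2025-01-15",
    prompt_template_version="pt-7",
    model_name="internal-llm",
    model_version="2025.01",
    answer_text="...",
    escalated_to_human=False,
)
print(json.dumps(asdict(record), indent=2))   # written to an append-only audit sink in practice
```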

Continue reading

The Hidden Costs of Masking: What Research and Autistic Voices Reveal

This article explores the hidden psychological, physical, and social costs of autistic masking, drawing on current research and lived experience. Combining academic insight with personal anecdotes, it examines how masking impacts wellbeing, identity, and burnout, and argues that masking is not an individual adaptation but a response to structural neurotypical norms and inequality embedded in modern social and professional life.

Continue reading

Temporal RAG: Retrieving “State as Known on Date X” for LLMs in Financial Services

This article explains why standard Retrieval-Augmented Generation (RAG) silently corrupts history in Financial Services by answering past questions with present-day truth. It introduces Temporal RAG: a regulator-defensible retrieval pattern that conditions every query on an explicit as_of timestamp and retrieves only from Point-in-Time (PIT) slices governed by SCD2 validity, precedence rules, and repair policies. Using concrete implementation patterns and audit reconstruction examples, it shows how to make LLM retrieval reproducible, evidential, and safe for complaints, remediation, AML, and conduct-risk use cases.
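
A minimal sketch of the core retrieval pattern: every query carries an explicit as_of timestamp and only chunks whose validity covers that instant are eligible for ranking. The chunk structure and scoring function are illustrative assumptions.

```python
# Minimal sketch of Temporal RAG: retrieval is conditioned on an explicit as_of timestamp,
# and only chunks whose SCD2 validity covers that instant are eligible candidates.
# The chunk structure and the scoring function are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime
from typing import Callable, Optional

@dataclass(frozen=True)
class Chunk:
    chunk_id: str
    text: str
    valid_from: datetime          # SCD2 validity of the source record
    valid_to: Optional[datetime]

def temporal_retrieve(query: str,
                      as_of: datetime,
                      chunks: list[Chunk],
                      score: Callable[[str, str], float],
                      k: int = 5) -> list[Chunk]:
    """Restrict to the PIT slice first, then rank: the model can only see what was
    valid at `as_of`, never present-day truth."""
    pit_slice = [
        c for c in chunks
        if c.valid_from <= as_of and (c.valid_to is None or as_of < c.valid_to)
    ]
    return sorted(pit_slice, key=lambda c: score(query, c.text), reverse=True)[:k]
```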

Continue reading