Monthly Archives: January 2026

Stolen Valour, Borrowed Honour, and Intellectual Property

This article uses the concept of stolen valour as a metaphor to examine recognition, attribution, and integrity in intellectual property, research, and start-ups. It explores the difference between honour that can be shared and credit that must be earned, arguing that while recognition can be gifted, it only retains meaning when grounded in truth. When attribution is misused, generosity curdles into erasure.

Continue reading

Authority, Truth, and Belief in Financial Services Data Platforms

Financial services data architectures often fail by asking the wrong question: “Which system is the system of record?” This article argues that regulated firms operate with multiple systems of authority, while truth exists outside systems altogether. What data platforms actually manage is institutional belief: what the firm believed at a given time, based on available evidence. By separating authority, truth, and belief, firms can build architectures that preserve history, explain disagreement, and withstand regulatory scrutiny through accountable, reconstructable decision-making.

Continue reading

Cyberpsychology Today: Signal, Noise, and What We’re Actually Talking About

As cyberpsychology gains visibility, it is also losing precision. This article maps how the term is currently used, identifies common category errors, and explains why collapsing distinct domains into a single label weakens both theory and practice. It clarifies the boundary between cyberpsychology and human-factors work, and positions Psyber Inc as downstream application rather than field definition.

Continue reading

Eventual Consistency in Regulated Financial Services Data Platforms

In regulated financial services, eventual consistency is often treated as a technical weakness to be minimised or hidden. This article argues the opposite: eventual consistency is the only honest and defensible consistency model in a multi-system, regulator-supervised institution. Regulators do not require instantaneous agreement: they require explainability, reconstructability, and reasonableness at the time decisions were made. By treating eventual consistency as an explicit architectural and regulatory contract, firms can bound inconsistency, preserve historical belief, and strengthen audit defensibility rather than undermine it.

Continue reading

Why UK Financial Services Data Platforms Must Preserve Temporal Truth for Regulatory Compliance

A Regulatory Perspective (2025–2026). UK Financial Services regulation in 2025–2026 increasingly requires firms to demonstrate not just what is true today, but what was known at the time decisions were made. Across Consumer Duty, s166 reviews, AML/KYC, model risk, and operational resilience, regulators expect deterministic reconstruction of historical belief, supported by traceable evidence. This article explains where that requirement comes from, why traditional current-state platforms fail under scrutiny, and why preserving temporal truth inevitably drives architectures that capture change over time as a foundational control, not a technical preference.
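The "reconstruction of historical belief" described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the field names, dates, and `as_known_at` helper are assumptions for this sketch, not the article's implementation): each row records what the firm believed and the system-time interval during which that belief was held, so any past point in time can be replayed deterministically.

```python
from datetime import date

# Hypothetical belief ledger: each row records what the firm believed
# about a key, bounded by when that belief was recorded and superseded.
BELIEFS = [
    {"key": "customer_risk", "value": "LOW",
     "recorded_from": date(2024, 1, 1), "recorded_to": date(2024, 6, 1)},
    {"key": "customer_risk", "value": "HIGH",
     "recorded_from": date(2024, 6, 1), "recorded_to": None},  # current belief
]

def as_known_at(rows, key, when):
    """Reconstruct what was known at `when`: return the value whose
    recorded interval contains that date, or None if nothing was known."""
    for row in rows:
        if row["key"] != key:
            continue
        ends = row["recorded_to"]
        if row["recorded_from"] <= when and (ends is None or when < ends):
            return row["value"]
    return None
```

A decision made in March 2024 is then defensible with the belief held at the time: `as_known_at(BELIEFS, "customer_risk", date(2024, 3, 15))` returns `"LOW"`, even though the current belief is `"HIGH"`.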

Continue reading

Common Anti-Patterns in Financial Services Data Platforms

Financial Services data platforms rarely fail because of tools, scale, or performance. They fail because architectural decisions are left implicit, applied inconsistently, or overridden under pressure. This article documents the most common and damaging failure modes observed in large-scale FS data platforms: not as edge cases, but as predictable outcomes of well-intentioned instincts applied at the wrong layer. Each pattern shows how trust erodes quietly over time, often remaining invisible until audit, remediation, or regulatory scrutiny exposes the underlying architectural fault lines.

Continue reading

Operationalising Time, Consistency, and Freshness in a Financial Services Data Platform

This article translates the temporal doctrine established in Time, Consistency, and Freshness in a Financial Services Data Platform into enforceable architectural mechanisms. It focuses not on tools or technologies, but on the structural controls required to make time, consistency, and freshness unavoidable properties of a Financial Services (FS) data platform. The objective is simple: ensure that temporal correctness does not depend on developer discipline, operational goodwill, or institutional memory, but is instead enforced mechanically by the platform itself.

Continue reading

Space Elves, Dragons, and the Joy of Kitbashing Exodites

A personal reflection on decades of role-playing games, Warhammer, and kitbashing, exploring a lifelong love of Exodite Eldar, space elves, dragons, and the creative joy of building armies by hand, set against the wider cultural moment where Warhammer 40,000 has finally become something people are proud to admit they love.

Continue reading

Databricks vs Snowflake vs Fabric vs Other Tech with SCD2 Bronze: Choosing the Right Operating Model

Choosing the right platform for implementing SCD2 in the Bronze layer is not a tooling decision but an operating model decision. At scale, SCD2 Bronze forces trade-offs around change capture, merge frequency, physical layout, cost governance, and long-term analytics readiness. Different platforms optimise for different assumptions about who owns those trade-offs. This article compares Databricks, Snowflake, Microsoft Fabric, and alternative technologies through that lens, with practical guidance for Financial Services organisations designing SCD2 Bronze layers that must remain scalable, auditable, and cost-effective over time.

Continue reading

Power Without Alibis: What Remains After You Understand How Cruelty Works

This final essay completes a trilogy on power by asking what remains once its mechanics are fully understood. Building on Pfeffer’s organisational realism and Machiavelli’s historical clarity, it argues that unsanitised descriptions of power do not endorse cruelty but remove the moral alibis that allow harm to persist. By collapsing the distance between action and consequence, such writing makes innocence unavailable and neutrality impossible. The central risk, that truth can be weaponised, is acknowledged, but silence is shown to be more partisan, concentrating power through ignorance rather than constraining it.

Continue reading

From Partitioning to Liquid Clustering: Evolving SCD2 Bronze on Databricks at Scale

As SCD2 Bronze layers mature, even well-designed partitioning and ZORDER strategies can struggle under extreme scale, high-cardinality business keys, and evolving access patterns. This article examines why SCD2 Bronze datasets place unique pressure on static data layouts and introduces Databricks Liquid Clustering as a natural next step in their operational evolution. It explains when Liquid Clustering becomes appropriate, how it fits within regulated Financial Services environments, and how it preserves auditability while improving long-term performance and readiness for analytics and AI workloads.

Continue reading

From The Prince to the Boardroom: Power in Machiavelli and Jeffrey Pfeffer

Comparing Jeffrey Pfeffer with Niccolò Machiavelli reveals a shared realism about power stripped of moral comfort. Both describe influence as driven by perception, control, and strategic action rather than virtue, and both expose why idealism so often fails in practice. Yet they diverge in intent: Machiavelli accepts harm as the price of order, while Pfeffer ultimately confronts the human and organisational costs of power-driven systems. Together, they show that the mechanics of power are historically stable, even as modern leaders are increasingly forced to reckon with their consequences.

Continue reading

From Graph Insight to Action: Decisions, Controls & Remediation in Financial Services Platforms

This article argues that financial services platforms fail not from lack of insight, but from weak architecture between detection and action. Graph analytics and models generate signals, not decisions. Collapsing the two undermines accountability, auditability, and regulatory defensibility. By separating signals, judgements, and decisions; treating decisions as time-qualified data; governing controls as executable policy; and enabling deterministic replay for remediation, platforms can move from reactive analytics to explainable, defensible action. In regulated environments, what matters is not what was known, but what was decided, when, and why.

Continue reading

Jeffrey Pfeffer’s Rules of Power: Truth, Use, and Consequence

Jeffrey Pfeffer’s work strips away comforting myths about merit and leadership to expose how power actually operates inside organisations. Drawing on decades of research, he shows that influence is accumulated through perception, alliances, and control of resources rather than competence alone. While his “rules of power” are descriptively accurate, they are ethically neutral and often corrosive. Pfeffer’s later work confronts the human cost of these systems, forcing leaders to choose between naïve idealism and cynical effectiveness—and to decide whether power will be used merely to win or to change the conditions under which winning occurs.

Continue reading

Networks, Relationships & Financial Crime Graphs on the Bronze Layer

Financial crime rarely appears in isolated records; it emerges through networks of entities, relationships, and behaviours over time. This article explains why financial crime graphs must be treated as foundational, temporal structures anchored near the Bronze layer of a regulated data platform. It explores how relationships are inferred, versioned, and governed, why “known then” versus “known now” matters, and how poorly designed graphs undermine regulatory defensibility. Done correctly, crime graphs provide explainable, rebuildable network intelligence that stands up to scrutiny years later.

Continue reading

The Intersection of the Message: Cicero, Machiavelli, and Ivy Lee

This article explores how Cicero, Machiavelli, and Ivy Lee each used “the message” as a vehicle for power, persuasion, and public control. From Cicero’s moralised rhetoric to Machiavelli’s cunning optics to Ivy Lee’s media-savvy framing, it dissects how messaging intersects with ethics, audience, and intent. Their techniques still shape political campaigns, corporate comms, and crisis response today—proving that the message is never just about words, but always about influence.

Continue reading

Probabilistic & Graph-Based Identity in Regulated Financial Services

This article argues that probabilistic and graph-based identity techniques are unavoidable in regulated Financial Services, but only defensible when tightly governed. Deterministic entity resolution remains the foundation, providing anchors, constraints, and auditability. Probabilistic scores and identity graphs introduce likelihood and network reasoning, not truth, and must be time-bound, versioned, and replayable. When anchored to immutable history, SCD2 discipline, and clear guardrails, these techniques enhance fraud and AML insight; without discipline, they create significant regulatory risk.

Continue reading

Merry Christmas and Happy New Year 2026 from the West Midlands Cyber Hub

As the new year begins, the West Midlands Cyber Hub is delivering an ambitious programme of practical, community-driven cyber events from January to March… with more already in development. This programme is focused on building cyber capability, confidence, and collaboration across the West Midlands, supporting organisations, practitioners, and the wider regional economy.

Continue reading

WTF is the Fellegi–Sunter Model? A Practical Guide to Record Matching in an Uncertain World

The Fellegi–Sunter model is the foundational probabilistic framework for record linkage… deciding whether two imperfect records refer to the same real-world entity. Rather than enforcing brittle matching rules, it treats linkage as a problem of weighing evidence under uncertainty. By modelling how fields behave for true matches versus non-matches, it produces interpretable scores and explicit decision thresholds. Despite decades of new tooling and machine learning, most modern matching systems still rest on this logic… often without acknowledging it.
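The "weighing evidence" logic above can be sketched directly. In a minimal Fellegi–Sunter setup, each field has an m-probability (chance it agrees given a true match) and a u-probability (chance it agrees given a non-match); agreements and disagreements contribute log-likelihood ratios that sum to a match weight, which is then classified against explicit thresholds. The field names, m/u values, and thresholds below are illustrative assumptions, not calibrated parameters:

```python
import math

# Hypothetical per-field parameters:
#   m = P(field agrees | true match), u = P(field agrees | non-match)
FIELD_PARAMS = {
    "surname":       {"m": 0.95, "u": 0.05},
    "date_of_birth": {"m": 0.97, "u": 0.01},
    "postcode":      {"m": 0.90, "u": 0.10},
}

def match_weight(agreements):
    """Sum log-likelihood ratios: agreement on a field adds log2(m/u),
    disagreement adds log2((1-m)/(1-u))."""
    total = 0.0
    for field, agrees in agreements.items():
        p = FIELD_PARAMS[field]
        if agrees:
            total += math.log2(p["m"] / p["u"])
        else:
            total += math.log2((1 - p["m"]) / (1 - p["u"]))
    return total

def classify(weight, upper=8.0, lower=0.0):
    """Fellegi–Sunter decision rule with explicit thresholds:
    link above `upper`, non-link below `lower`, else clerical review."""
    if weight >= upper:
        return "link"
    if weight <= lower:
        return "non-link"
    return "possible link (clerical review)"
```

For example, agreement on all three fields yields a high weight and a "link"; agreement on surname and date of birth but not postcode lands between the thresholds, routing the pair to clerical review. This interpretability, scores you can decompose field by field, is exactly why the model endures.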

Continue reading

The Ivy Lee Method: Public Relations, Productivity, and Propaganda

Ivy Lee shaped the modern world of public relations by mastering narrative control, pioneering a simple yet powerful productivity method, and reframing truth as both tactic and ethic. This article explores Lee’s legacy, from press release orchestration and the “two-way street” philosophy to his morally ambiguous work for industrial giants, ending with a practical guide to applying his enduring techniques today.

Continue reading