Author Archives: Wayne Horkan

About Wayne Horkan

I’m a technologist and engineer, typically working in enterprise architecture and systems engineering.

From Threat Model to Regulator Narrative: Security Architecture for Regulated Financial Services Data Platforms

This article reframes security as an architectural property of regulated financial services data platforms, not a bolt-on set of controls. It argues that true security lies in preserving temporal truth, enforcing authority over data, and enabling defensible reconstruction of decisions under scrutiny. By grounding security in threat models, data semantics, Type 2 slowly changing dimension (SCD2) foundations, and regulator-facing narratives, the article shows how platforms can prevent silent history rewriting, govern AI safely, and treat auditability as a first-class security requirement.

Continue reading

Collapsing the Medallion: Layers as Patterns, Not Physical Boundaries

The medallion model was never meant to be a physical storage mandate. It is a pattern language for expressing guarantees about evidence, interpretation, and trust. In mature, regulated platforms, those guarantees increasingly live in contracts, lineage, governance, and tests: not in rigid physical layers. Collapsing the medallion does not weaken regulatory substantiation; it strengthens it by decoupling invariants from layout. This article explains why layers were necessary, why they eventually collapse, and what must never be lost when they do.

Continue reading

Structuring Cyberpsychology: From Foundations to Practice

This article sets out the structure of a cyberpsychology curriculum designed to address the coherence gap identified in Cyberpsychology Today. Rather than treating cyberpsychology as a loose collection of effects, this framework organises the field from foundational theory through to applied practice. The phases that follow are not arbitrary. They reflect the minimum conceptual spine required to study how persistent, mediated digital environments shape human psychology, and how that knowledge can be responsibly translated into research, policy, and real-world intervention. What follows is not a manifesto, but an architecture for learning.

Continue reading

From Writes to Reads: Applying CQRS Thinking to Regulated Data Platforms

In regulated financial environments, data duplication is often treated as a failure rather than a necessity. Command Query Responsibility Segregation (CQRS) is an approach that separates the concerns of writing data from those of reading it. This article reframes duplication through CQRS-style thinking, arguing that separating write models (which execute actions) from read models (which explain outcomes) is essential for both safe operation and regulatory defensibility. By making authority explicit and accepting eventual consistency, institutions can act in real time while reconstructing explainable, auditable belief over time. CQRS is presented not as a framework, but as a mental model for survivable data platforms.
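The write/read separation described above can be sketched in a few lines. This is a minimal illustration, not the article's implementation: the event type, class names, and in-memory log are all hypothetical, standing in for whatever command store and projection infrastructure a real platform would use.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class PaymentAuthorised:
    """An immutable record of an action the write model executed."""
    payment_id: str
    amount: float
    recorded_at: datetime


class WriteModel:
    """Executes actions and appends immutable events; it never serves queries."""

    def __init__(self, log: list):
        self.log = log

    def authorise(self, payment_id: str, amount: float) -> None:
        self.log.append(
            PaymentAuthorised(payment_id, amount, datetime.now(timezone.utc))
        )


class ReadModel:
    """Explains outcomes by projecting over the event log; it never executes actions."""

    def __init__(self, log: list):
        self.log = log

    def total_authorised(self) -> float:
        return sum(event.amount for event in self.log)


# The two models share only the event log, so each can evolve independently;
# in a distributed deployment the projection would lag the log (eventual consistency).
log = []
WriteModel(log).authorise("pay-001", 250.0)
print(ReadModel(log).total_authorised())  # → 250.0
```

The point of the sketch is the asymmetry: the write side is optimised for acting with authority, the read side for explaining what happened, and neither mutates the other's state.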

Continue reading

Edge Systems Are a Feature: Why OLTP, CRM, and Low-Latency Stores Must Exist

Modern data platforms often treat operational systems as legacy constraints to be eliminated. This article argues the opposite. Transactional systems, CRM platforms, and low-latency decision stores exist because some decisions must be made synchronously, locally, and with authority. These “edge systems” are not architectural debt but purpose-built domains of control. A mature data platform does not replace them or centralise authority falsely; it integrates with them honestly, preserving their decisions, context, and evolution over time.

Continue reading

Blobs as First-Class Artefacts in Regulated Data Platforms

In regulated financial services, semi-structured payloads such as XML, JSON, PDFs, and messages are not “raw data” to be discarded after parsing: they are primary evidence. This article argues that blobs must be treated as first-class artefacts: preserved intact, timestamped, queryable, and reinterpretable over time. Relational models are interpretations that evolve; original payloads anchor truth. Platforms that discard or mutate artefacts optimise for neatness today at the cost of defensibility tomorrow.
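The "preserved intact, timestamped, queryable, and reinterpretable" properties can be sketched with a content-addressed store. This is an illustrative toy, not the article's design: the class name, hash-keyed dictionary, and JSON payload are assumptions standing in for real object storage.

```python
import hashlib
import json
from datetime import datetime, timezone


class ArtefactStore:
    """Keeps original payloads intact and addressable; interpretations live elsewhere."""

    def __init__(self):
        self._blobs = {}

    def ingest(self, payload: bytes) -> str:
        digest = hashlib.sha256(payload).hexdigest()
        # Store once, keyed by content hash; never overwrite an existing artefact.
        self._blobs.setdefault(digest, {
            "payload": payload,
            "ingested_at": datetime.now(timezone.utc).isoformat(),
        })
        return digest

    def reinterpret(self, digest: str) -> dict:
        """Parse the original bytes again; the blob itself is never mutated,
        so tomorrow's parser can disagree with today's without losing evidence."""
        return json.loads(self._blobs[digest]["payload"])


store = ArtefactStore()
ref = store.ingest(b'{"customer": "c-42", "limit": 1000}')
print(store.reinterpret(ref)["limit"])  # → 1000
```

Because the key is a hash of the content, re-ingesting the same payload is idempotent, and any derived relational row can cite the exact bytes it was parsed from.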

Continue reading

Serving the Community: The Story of Servol, Charles Jordan MBE and What Was, What Is, and What Could Be

This article traces the history of Servol Community Services from its founding in 1979 by Charles Jordan MBE, exploring its mission, growth, and the pressures facing modern social-care charities. It reflects on recognition, institutional memory, and the quiet sacrifices behind community work, ending with a personal appeal to honour the charity’s origins and reconnect present delivery with its founding spirit.

Continue reading

Why Transactions Are Events, Not Slowly Changing Dimensions

This article argues that modelling transactions as slowly changing dimensions is a fundamental category error in financial data platforms. Transactions are immutable events that occur once and do not change; what evolves is the organisation’s interpretation of them through enrichment, classification, and belief updates. Applying SCD2 logic to transactions conflates fact with interpretation, corrupts history, and undermines regulatory defensibility. By separating immutable event records from mutable interpretations, platforms become clearer, auditable, and capable of reconstructing past decisions without rewriting reality.
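The separation of immutable event from mutable interpretation can be made concrete with a small sketch. The names and in-memory structures here are illustrative assumptions, not the article's schema: the essential point is that the event record is frozen while classifications accumulate as an append-only history.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class Transaction:
    """An event: it happened once and never changes."""
    txn_id: str
    amount: float
    occurred_at: str


class Interpretations:
    """The firm's evolving belief about each event, kept as append-only history."""

    def __init__(self):
        self._history = {}

    def classify(self, txn_id: str, label: str) -> None:
        # Append a new belief; earlier beliefs are never edited or deleted.
        self._history.setdefault(txn_id, []).append(
            (datetime.now(timezone.utc), label)
        )

    def current(self, txn_id: str) -> str:
        return self._history[txn_id][-1][1]

    def as_of(self, txn_id: str, when: datetime) -> str:
        """Reconstruct what the firm believed at `when` without rewriting anything."""
        labels = [label for ts, label in self._history[txn_id] if ts <= when]
        return labels[-1] if labels else "unclassified"


txn = Transaction("t-001", 9500.0, "2025-01-15T10:30:00Z")
beliefs = Interpretations()
beliefs.classify(txn.txn_id, "pending-review")
beliefs.classify(txn.txn_id, "suspicious")
print(beliefs.current(txn.txn_id))  # → suspicious
```

Applying SCD2 versioning to `Transaction` itself would blur exactly this boundary: the fact would appear to change whenever the belief did.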

Continue reading

Stolen Valour, Borrowed Honour, and Intellectual Property

This article uses the concept of stolen valour as a metaphor to examine recognition, attribution, and integrity in intellectual property, research, and start-ups. It explores the difference between honour that can be shared and credit that must be earned, arguing that while recognition can be gifted, it only retains meaning when grounded in truth. When attribution is misused, generosity curdles into erasure.

Continue reading

Authority, Truth, and Belief in Financial Services Data Platforms

Financial services data architectures often fail by asking the wrong question: “Which system is the system of record?” This article argues that regulated firms operate with multiple systems of authority, while truth exists outside systems altogether. What data platforms actually manage is institutional belief: what the firm believed at a given time, based on available evidence. By separating authority, truth, and belief, firms can build architectures that preserve history, explain disagreement, and withstand regulatory scrutiny through accountable, reconstructable decision-making.

Continue reading

Cyberpsychology Today: Signal, Noise, and What We’re Actually Talking About

As cyberpsychology gains visibility, it is also losing precision. This article maps how the term is currently used, identifies common category errors, and explains why collapsing distinct domains into a single label weakens both theory and practice. It clarifies the boundary between cyberpsychology and human-factors work, and positions Psyber Inc as downstream application rather than field definition.

Continue reading

Eventual Consistency in Regulated Financial Services Data Platforms

In regulated financial services, eventual consistency is often treated as a technical weakness to be minimised or hidden. This article argues the opposite: eventual consistency is the only honest and defensible consistency model in a multi-system, regulator-supervised institution. Regulators do not require instantaneous agreement: they require explainability, reconstructability, and reasonableness at the time decisions were made. By treating eventual consistency as an explicit architectural and regulatory contract, firms can bound inconsistency, preserve historical belief, and strengthen audit defensibility rather than undermine it.
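One way to picture "bounding inconsistency" as an explicit contract is a declared staleness budget per consumer. The contract names and thresholds below are hypothetical, invented purely for illustration; a real platform would derive them from regulatory and operational requirements.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical freshness contracts: each consumer declares the maximum staleness
# it can tolerate, turning "eventual" into an explicit, testable bound.
FRESHNESS_CONTRACTS = {
    "fraud_screening": timedelta(seconds=5),
    "regulatory_reporting": timedelta(hours=24),
}


def within_contract(consumer: str, source_updated_at: datetime, now=None) -> bool:
    """True if the replica this consumer reads is fresh enough for its declared use."""
    now = now or datetime.now(timezone.utc)
    return now - source_updated_at <= FRESHNESS_CONTRACTS[consumer]


# A 12-hour-old replica is unacceptable for real-time screening
# but entirely defensible for end-of-day reporting.
now = datetime(2025, 6, 1, 18, 0, tzinfo=timezone.utc)
stale_by_12h = now - timedelta(hours=12)
print(within_contract("fraud_screening", stale_by_12h, now=now))      # → False
print(within_contract("regulatory_reporting", stale_by_12h, now=now))  # → True
```

Framed this way, the question for an auditor is never "were the systems in sync?" but "was every read within its declared, justified bound?"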

Continue reading

Why UK Financial Services Data Platforms Must Preserve Temporal Truth for Regulatory Compliance

A Regulatory Perspective (2025–2026). UK Financial Services regulation in 2025–2026 increasingly requires firms to demonstrate not just what is true today, but what was known at the time decisions were made. Across Consumer Duty, s166 reviews, AML/KYC, model risk, and operational resilience, regulators expect deterministic reconstruction of historical belief, supported by traceable evidence. This article explains where that requirement comes from, why traditional current-state platforms fail under scrutiny, and why preserving temporal truth inevitably drives architectures that capture change over time as a foundational control, not a technical preference.
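"Deterministic reconstruction of historical belief" is what an SCD2-style as-of query delivers. The rows and attribute below are invented for illustration; the mechanism shown, half-open validity intervals that are appended to rather than overwritten, is the general one.

```python
from datetime import date

# Hypothetical SCD2 history: each version of a customer's risk rating carries
# valid_from/valid_to, so change is appended rather than overwritten.
ratings = [
    {"customer": "c-42", "rating": "low",
     "valid_from": date(2024, 1, 1), "valid_to": date(2025, 3, 1)},
    {"customer": "c-42", "rating": "high",
     "valid_from": date(2025, 3, 1), "valid_to": date(9999, 12, 31)},
]


def rating_as_of(customer: str, as_of: date) -> str:
    """Deterministically reconstruct what was believed about `customer` on `as_of`."""
    for row in ratings:
        # Half-open interval [valid_from, valid_to) gives each day exactly one version.
        if row["customer"] == customer and row["valid_from"] <= as_of < row["valid_to"]:
            return row["rating"]
    raise LookupError(f"no belief recorded for {customer} on {as_of}")


print(rating_as_of("c-42", date(2025, 2, 15)))  # → low
```

A current-state table can answer "what is the rating?"; only a history like this can answer "what did the firm believe on the day the decision was made?", which is the question the regulatory regimes above actually ask.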

Continue reading

Common Anti-Patterns in Financial Services Data Platforms

Financial Services data platforms rarely fail because of tools, scale, or performance. They fail because architectural decisions are left implicit, applied inconsistently, or overridden under pressure. This article documents the most common and damaging failure modes observed in large-scale FS data platforms: not as edge cases, but as predictable outcomes of well-intentioned instincts applied at the wrong layer. Each pattern shows how trust erodes quietly over time, often remaining invisible until audit, remediation, or regulatory scrutiny exposes the underlying architectural fault lines.

Continue reading

Operationalising Time, Consistency, and Freshness in a Financial Services Data Platform

This article translates the temporal doctrine established in Time, Consistency, and Freshness in a Financial Services Data Platform into enforceable architectural mechanisms. It focuses not on tools or technologies, but on the structural controls required to make time, consistency, and freshness unavoidable properties of a Financial Services (FS) data platform. The objective is simple: ensure that temporal correctness does not depend on developer discipline, operational goodwill, or institutional memory, but is instead enforced mechanically by the platform itself.

Continue reading

Space Elves, Dragons, and the Joy of Kitbashing Exodites

A personal reflection on decades of role-playing games, Warhammer, and kitbashing, exploring a lifelong love of Exodite Eldar, space elves, dragons, and the creative joy of building armies by hand, set against the wider cultural moment where Warhammer 40,000 has finally become something people are proud to admit they love.

Continue reading

Databricks vs Snowflake vs Fabric vs Other Tech with SCD2 Bronze: Choosing the Right Operating Model

Choosing the right platform for implementing SCD2 in the Bronze layer is not a tooling decision but an operating model decision. At scale, SCD2 Bronze forces trade-offs around change capture, merge frequency, physical layout, cost governance, and long-term analytics readiness. Different platforms optimise for different assumptions about who owns those trade-offs. This article compares Databricks, Snowflake, Microsoft Fabric, and alternative technologies through that lens, with practical guidance for Financial Services organisations designing SCD2 Bronze layers that must remain scalable, auditable, and cost-effective over time.

Continue reading

Power Without Alibis: What Remains After You Understand How Cruelty Works

This final essay completes a trilogy on power by asking what remains once its mechanics are fully understood. Building on Pfeffer’s organisational realism and Machiavelli’s historical clarity, it argues that unsanitised descriptions of power do not endorse cruelty but remove the moral alibis that allow harm to persist. By collapsing the distance between action and consequence, such writing makes innocence unavailable and neutrality impossible. The central risk, that truth can be weaponised, is acknowledged, but silence is shown to be more partisan, concentrating power through ignorance rather than constraining it.

Continue reading

From Partitioning to Liquid Clustering: Evolving SCD2 Bronze on Databricks at Scale

As SCD2 Bronze layers mature, even well-designed partitioning and ZORDER strategies can struggle under extreme scale, high-cardinality business keys, and evolving access patterns. This article examines why SCD2 Bronze datasets place unique pressure on static data layouts and introduces Databricks Liquid Clustering as a natural next step in their operational evolution. It explains when Liquid Clustering becomes appropriate, how it fits within regulated Financial Services environments, and how it preserves auditability while improving long-term performance and readiness for analytics and AI workloads.

Continue reading

From The Prince to the Boardroom: Power in Machiavelli and Jeffrey Pfeffer

Comparing Jeffrey Pfeffer with Niccolò Machiavelli reveals a shared realism about power stripped of moral comfort. Both describe influence as driven by perception, control, and strategic action rather than virtue, and both expose why idealism so often fails in practice. Yet they diverge in intent: Machiavelli accepts harm as the price of order, while Pfeffer ultimately confronts the human and organisational costs of power-driven systems. Together, they show that the mechanics of power are historically stable, even as modern leaders are increasingly forced to reckon with their consequences.

Continue reading