Tag Archives: Regulatory Data Architecture

Series Wrap-Up: Reconstructing Time, Truth, and Trust in UK Financial Services Data Platforms

This series explored how UK Financial Services data platforms can preserve temporal truth, reconstruct institutional belief, and withstand regulatory scrutiny at scale. Beginning with foundational concepts such as SCD2 and event modelling, it developed into a comprehensive architectural pattern centred on an audit-grade Bronze layer, non-SCD Silver consumption, and point-in-time defensibility. Along the way, it addressed operational reality, governance, cost, AI integration, and regulatory expectations. This final article brings the work together, offering a structured map of the series and a coherent lens for understanding how modern, regulated data platforms actually succeed. Taken together, this body of work describes what I refer to as a “land it early, manage it early” data platform architecture for regulated industries.
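
To make the pattern tangible, the sketch below shows, in simplified and purely illustrative form, what that split looks like: an SCD2-style Bronze history that never loses a version, and a non-SCD Silver view that exposes only the currently open version for consumption. The column names and schema are assumptions made for illustration, not the series' actual design.

```python
# Illustrative sketch only: an SCD2-style Bronze history plus a non-SCD Silver
# "current state" view. Column names (entity_id, payload, valid_from, valid_to)
# are assumptions, not the schema used in the series.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass(frozen=True)
class BronzeRow:
    entity_id: str
    payload: dict                  # attributes as received from the source system
    valid_from: datetime           # when this version became true in the platform
    valid_to: Optional[datetime]   # None while the version is still current

def silver_current_view(bronze: list[BronzeRow]) -> dict[str, dict]:
    """Non-SCD Silver consumption: one row per entity, the open SCD2 version only."""
    return {row.entity_id: row.payload for row in bronze if row.valid_to is None}

bronze = [
    BronzeRow("cust-001", {"postcode": "E14 5AB"}, datetime(2023, 1, 1), datetime(2024, 6, 1)),
    BronzeRow("cust-001", {"postcode": "EC2A 2BB"}, datetime(2024, 6, 1), None),
]
print(silver_current_view(bronze))   # {'cust-001': {'postcode': 'EC2A 2BB'}}
```

The point of the split is that Bronze retains every version for audit and point-in-time reconstruction, while Silver consumers never have to reason about temporal columns at all.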

Golden-Source Resolution, Multi-Source Precedence, and Regulatory Point-in-Time Reporting on SCD2 Bronze

Why Deterministic Precedence Is the Line Between “Data Platform” and “Regulatory Liability”

Modern UK Financial Services organisations ingest customer, account, and product data from 5–20 different systems of record, each holding overlapping and often conflicting truth. Delivering a reliable “Customer 360” or “Account 360” requires deterministic, audit-defensible precedence rules, survivorship logic, temporal correction workflows, and regulatory point-in-time (PIT) reconstructions, all operating on an SCD2 Bronze layer. This article explains how mature banks resolve multi-source conflicts, maintain lineage, rebalance history when higher-precedence data arrives late, and produce FCA/PRA-ready temporal truth. It describes the real patterns used in Tier-1 institutions and the architectural techniques required to make them deterministic, scalable, and regulator-defensible.
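
As a minimal, hedged sketch of the idea (not the implementation described in the article), the snippet below reconstructs what each source said about an attribute as at a given point in time, then resolves conflicts through a fixed source ranking. The source names, precedence order, and schema are assumptions for illustration only.

```python
# Hedged sketch: deterministic precedence + point-in-time resolution over
# SCD2-style Bronze versions. Source names, ranking, and schema are illustrative.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Lower number = higher precedence; in practice the ranking itself is versioned config.
SOURCE_PRECEDENCE = {"core_banking": 1, "crm": 2, "marketing": 3}

@dataclass(frozen=True)
class BronzeVersion:
    entity_id: str
    source: str
    attribute: str
    value: str
    valid_from: datetime
    valid_to: Optional[datetime]   # None = still open

def as_at(rows, attribute, entity_id, pit):
    """All SCD2 versions of one attribute that were valid at the point-in-time `pit`."""
    return [
        r for r in rows
        if r.entity_id == entity_id and r.attribute == attribute
        and r.valid_from <= pit and (r.valid_to is None or pit < r.valid_to)
    ]

def resolve(rows, attribute, entity_id, pit):
    """Deterministic winner: highest-precedence source among versions valid at `pit`."""
    candidates = as_at(rows, attribute, entity_id, pit)
    if not candidates:
        return None
    return min(candidates, key=lambda r: SOURCE_PRECEDENCE[r.source])

bronze = [
    BronzeVersion("cust-001", "crm", "email", "old@example.com",
                  datetime(2023, 1, 1), datetime(2024, 3, 1)),
    BronzeVersion("cust-001", "crm", "email", "new@example.com",
                  datetime(2024, 3, 1), None),
    BronzeVersion("cust-001", "core_banking", "email", "kyc@example.com",
                  datetime(2024, 5, 1), None),
]

# Regulatory PIT question: "what did the platform believe on 2024-04-01?"
winner = resolve(bronze, "email", "cust-001", datetime(2024, 4, 1))
print(winner.source, winner.value)   # crm new@example.com (core_banking not yet valid)

later = resolve(bronze, "email", "cust-001", datetime(2024, 6, 1))
print(later.source, later.value)     # core_banking kyc@example.com (higher precedence wins)
```

The property that matters is that the winner is a pure function of the Bronze history, the precedence ranking, and the as-at timestamp, which is what makes the answer reproducible when a regulator asks the same question months later.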
