Tag Archives: incremental processing

From SCD2 Bronze to a Non-SCD Silver Layer in Other Tech (Iceberg, Hudi, BigQuery, Fabric)

Modern data platforms consistently separate historical truth from analytical usability by storing full SCD2 history in a Bronze layer and exposing a simplified, current-state Silver layer. Whether you use Apache Iceberg, Apache Hudi, Google BigQuery, or Microsoft Fabric, the same pattern applies: Bronze preserves an immutable, auditable change history, while Silver removes temporal complexity to deliver one row per business entity. Each platform implements the derivation differently (Iceberg snapshots, Hudi incremental queries, BigQuery QUALIFY, Fabric Delta MERGE), but the architectural principle is universal and essential in regulated environments.
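
The mechanics differ, but the Silver-side query is recognisably the same everywhere. As a sketch of the shared idea, the statement below derives a current-state Silver table from an SCD2 Bronze table using QUALIFY (the BigQuery-style variant; Snowflake supports the same clause). The dataset, table, and column names (bronze.customer_scd2, customer_id, valid_from) are illustrative assumptions, not names from the articles.

-- Bronze holds one row per entity version, SCD2-style:
-- (customer_id, attributes, valid_from, valid_to, is_current).
CREATE OR REPLACE TABLE silver.customer AS
SELECT
  customer_id,
  name,
  segment,
  valid_from AS last_changed_at
FROM bronze.customer_scd2
-- Keep only the most recent version of each entity.
QUALIFY ROW_NUMBER() OVER (
  PARTITION BY customer_id
  ORDER BY valid_from DESC
) = 1;

On Iceberg or Hudi the equivalent one-row-per-entity result comes from snapshot reads or incremental queries feeding a merge; the trade-offs per platform are what the article walks through.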

Continue reading

From SCD2 Bronze to a Non-SCD Silver Layer in Snowflake

This article explains a best-practice Snowflake pattern for transforming an SCD2 Bronze layer into a non-SCD Silver layer that exposes clean, current-state data. By retaining full historical truth in Bronze and using Streams, Tasks, and incremental MERGE logic, organisations can efficiently materialise one-row-per-entity Silver tables optimised for analytics. The approach simplifies governance, reduces cost, and delivers predictable performance for BI, ML, and regulatory reporting, while preserving the complete auditability required in highly regulated financial services environments.
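
A minimal sketch of that incremental flow, assuming hypothetical bronze.customer_scd2 and silver.customer tables and a transform_wh warehouse (none of these names come from the article): a Stream captures new SCD2 versions landing in Bronze, and a scheduled Task MERGEs only the rows that are now current into Silver.

-- Stream tracks row changes on the Bronze SCD2 table since last consumption.
CREATE OR REPLACE STREAM bronze.customer_scd2_stream
  ON TABLE bronze.customer_scd2;

-- Task fires on a schedule, but only runs when the stream has data.
CREATE OR REPLACE TASK silver.refresh_customer
  WAREHOUSE = transform_wh
  SCHEDULE = '15 MINUTE'
WHEN SYSTEM$STREAM_HAS_DATA('bronze.customer_scd2_stream')
AS
MERGE INTO silver.customer AS tgt
USING (
  -- New SCD2 versions that are current after the latest Bronze load;
  -- if a key changed more than once in a batch, keep the newest version.
  SELECT customer_id, name, segment, valid_from
  FROM bronze.customer_scd2_stream
  WHERE METADATA$ACTION = 'INSERT'
    AND is_current = TRUE
  QUALIFY ROW_NUMBER() OVER (
    PARTITION BY customer_id ORDER BY valid_from DESC
  ) = 1
) AS src
ON tgt.customer_id = src.customer_id
WHEN MATCHED THEN UPDATE SET
  name = src.name,
  segment = src.segment,
  last_changed_at = src.valid_from
WHEN NOT MATCHED THEN INSERT (customer_id, name, segment, last_changed_at)
  VALUES (src.customer_id, src.name, src.segment, src.valid_from);

Tasks are created suspended, so the pipeline only starts after ALTER TASK silver.refresh_customer RESUME; and because the MERGE consumes the stream, each run processes only new changes rather than rescanning Bronze.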

Continue reading