Tag Archives: Snowflake

Managing a Rapidly Growing SCD2 Bronze Layer on Snowflake: Best Practices and Architectural Guidance

Slowly Changing Dimension Type 2 (SCD2) patterns are widely used in Snowflake-based Financial Services platforms to preserve a full history of change for regulatory, analytical, and audit purposes. However, Snowflake’s architecture differs fundamentally from file-oriented lakehouse systems, requiring distinct design and operational choices. This article provides practical, production-focused guidance for operating large-scale SCD2 Bronze layers on Snowflake. It explains how to use Streams and Tasks, work with micro-partition behaviour, and apply batching strategies and cost-aware configuration to ensure predictable performance, controlled spend, and long-term readiness for analytics and AI workloads in regulated environments.
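
As a concrete illustration of the Streams-and-Tasks approach described above, here is a minimal sketch that wires a Stream on a landing table to a scheduled Task maintaining an SCD2 Bronze table. All object and column names (stg.customer_landing, bronze.customer_scd2, row_hash, is_current and so on) are hypothetical placeholders, and the single-MERGE pattern shown, which uses a UNION ALL source so that changed rows both close the old version and insert a new one, is one common way to do it rather than the article's prescribed implementation.

-- Capture changes arriving in an (assumed append-only) landing table.
CREATE OR REPLACE STREAM stg.customer_landing_stream
  ON TABLE stg.customer_landing;

-- Batch the work on a schedule and only spend compute when there is data.
-- Assumes at most one change per key per batch; dedupe the source otherwise.
CREATE OR REPLACE TASK bronze.maintain_customer_scd2
  WAREHOUSE = etl_wh
  SCHEDULE  = '30 MINUTE'
WHEN SYSTEM$STREAM_HAS_DATA('STG.CUSTOMER_LANDING_STREAM')
AS
MERGE INTO bronze.customer_scd2 AS tgt
USING (
    -- Every incoming row, keyed so it can close an open version.
    SELECT s.customer_id AS merge_key, s.customer_id, s.full_name, s.email, s.row_hash
    FROM   stg.customer_landing_stream s
    UNION ALL
    -- Changed rows again with a NULL key, so they never match and instead
    -- fall through to the INSERT branch as the new current version.
    SELECT NULL AS merge_key, s.customer_id, s.full_name, s.email, s.row_hash
    FROM   stg.customer_landing_stream s
    JOIN   bronze.customer_scd2 t
      ON   t.customer_id = s.customer_id
     AND   t.is_current
     AND   t.row_hash <> s.row_hash
) AS src
ON  tgt.customer_id = src.merge_key
AND tgt.is_current
WHEN MATCHED AND tgt.row_hash <> src.row_hash THEN
  UPDATE SET is_current = FALSE,
             valid_to   = CURRENT_TIMESTAMP()
WHEN NOT MATCHED THEN
  INSERT (customer_id, full_name, email, row_hash, valid_from, valid_to, is_current)
  VALUES (src.customer_id, src.full_name, src.email, src.row_hash,
          CURRENT_TIMESTAMP(), NULL, TRUE);

-- Tasks are created suspended; resume once tested.
ALTER TASK bronze.maintain_customer_scd2 RESUME;

The schedule, the SYSTEM$STREAM_HAS_DATA gate, and the warehouse size are the main cost levers here: widening the batch window trades latency for fewer, larger, and cheaper MERGE runs.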

Continue reading

From SCD2 Bronze to a Non-SCD Silver Layer in Snowflake

This article explains a best-practice Snowflake pattern for transforming an SCD2 Bronze layer into a non-SCD Silver layer that exposes clean, current-state data. By retaining full historical truth in Bronze and using Streams, Tasks, and incremental MERGE logic, organisations can efficiently materialise one-row-per-entity Silver tables optimised for analytics. The approach simplifies governance, reduces cost, and delivers predictable performance for BI, ML, and regulatory reporting, while preserving the complete auditability required in highly regulated Financial Services environments.
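
As a minimal sketch of the incremental pattern, and reusing the hypothetical names from the previous example, a Stream on the Bronze SCD2 table can feed a MERGE that keeps a one-row-per-entity Silver table in step with the latest current versions; this is an illustrative shape, not the article's exact implementation.

-- A stream on the Bronze SCD2 table records each newly written version.
CREATE OR REPLACE STREAM bronze.customer_scd2_stream
  ON TABLE bronze.customer_scd2;

-- Run from a Task (as in the previous sketch) so the stream offset advances
-- each time the statement commits.
MERGE INTO silver.customer AS tgt
USING (
    -- Keep only newly inserted current versions, latest per key in this batch.
    SELECT customer_id, full_name, email, valid_from
    FROM   bronze.customer_scd2_stream
    WHERE  METADATA$ACTION = 'INSERT'
      AND  is_current
    QUALIFY ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY valid_from DESC) = 1
) AS src
ON tgt.customer_id = src.customer_id
WHEN MATCHED THEN UPDATE SET
    full_name  = src.full_name,
    email      = src.email,
    valid_from = src.valid_from
WHEN NOT MATCHED THEN
    INSERT (customer_id, full_name, email, valid_from)
    VALUES (src.customer_id, src.full_name, src.email, src.valid_from);

Because Silver holds exactly one row per entity, downstream BI and ML queries avoid the is_current filters and point-in-time joins that SCD2 tables otherwise force on every consumer.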

Continue reading

Advanced SCD2 Optimisation Techniques for Mature Data Platforms

Advanced SCD2 optimisation techniques are essential for mature Financial Services data platforms, where demands for historical accuracy, regulatory traceability, and scale exceed the limits of basic SCD2 patterns. Attribute-level SCD2 significantly reduces storage and computation by tracking changes per column rather than per row. Hybrid SCD2 pipelines, combining lightweight delta logs with periodic MERGEs into the main Bronze table, minimise write amplification and improve reliability. Hash-based and probabilistic change detection eliminate unnecessary updates and accelerate temporal comparison at scale. Together, these techniques enable high-performance, audit-grade SCD2 in platforms such as Databricks, Snowflake, BigQuery, Iceberg, and Hudi, supporting the long-term data lineage and reconstruction needs of regulated UK Financial Services institutions.
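
As one illustration of the hash-based approach, expressed in Snowflake SQL with hypothetical column names, a single deterministic digest computed over the tracked attributes at ingestion time reduces change detection to one equality test:

-- Compute one digest per row over the tracked attributes.
SELECT
    customer_id,
    MD5(CONCAT_WS('||',
        COALESCE(full_name, ''),
        COALESCE(email, ''),
        COALESCE(TO_VARCHAR(credit_limit), ''),
        COALESCE(segment, '')
    )) AS row_hash
FROM stg.customer_landing;

-- In the SCD2 merge, the digest becomes the only change predicate needed:
--   WHEN MATCHED AND tgt.row_hash <> src.row_hash THEN ...

An attribute-level variant applies the same idea per column or per column group, so a change to one attribute versions only that attribute rather than the whole row.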

Continue reading

Using SCD2 in the Bronze Layer with a Non-SCD2 Silver Layer: A Modern Data Architecture Pattern for UK Financial Services

UK Financial Services firms increasingly implement SCD2 history in the Bronze layer while providing simplified, non-SCD2 current-state views in the Silver layer. This pattern preserves full historical auditability for FCA/PRA compliance and regulatory forensics, while delivering cleaner, faster, easier-to-use datasets for analytics, BI, and data science. It separates “truth” from “insight,” improves governance, supports Data Mesh models, reduces duplicated logic, and enables deterministic rebuilds across the lakehouse. In regulated UK Financial Services today, it is the only pattern I have seen that satisfies the full, real-world constraint set with no material trade-offs.
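
To make the deterministic-rebuild point concrete, here is a minimal sketch (using the same hypothetical names as the earlier examples) showing how a non-SCD2 current-state Silver table can be reconstructed entirely from the SCD2 Bronze history at any time:

-- Because Bronze retains every version, Silver can be rebuilt from scratch
-- and will always land in the same state for the same Bronze content.
CREATE OR REPLACE TABLE silver.customer AS
SELECT customer_id, full_name, email, segment, valid_from
FROM   bronze.customer_scd2
WHERE  is_current;

-- For point-in-time forensics, swap the filter for an as-of predicate:
--   WHERE valid_from <= <as-of timestamp>
--     AND (valid_to > <as-of timestamp> OR valid_to IS NULL)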

Continue reading

Databricks vs Snowflake vs Microsoft Fabric: Positioning the Future of Enterprise Data Platforms

This article extends the Databricks vs Snowflake comparison to include Microsoft Fabric, exploring the platforms’ philosophical roots, architectural approaches, and strategic trade-offs. It positions Fabric not as a direct competitor but as a consolidation play for Microsoft-centric organisations, and introduces Microsoft Purview as the governance layer that unifies divergent estates. Drawing on real enterprise patterns where Databricks underpins engineering, Fabric drives BI adoption, and functional teams risk fragmentation, the piece outlines the “Build–Consume–Govern” model and a phased transition plan. The conclusion emphasises orchestration across platforms, not choosing a single winner, as the path to a governed, AI-ready data estate.

Continue reading

Databricks vs Snowflake: A Critical Comparison of Modern Data Platforms

This article provides a critical, side-by-side comparison of Databricks and Snowflake, drawing on real-world experience leading enterprise data platform teams. It covers their origins, architecture, programming language support, workload fit, operational complexity, governance, AI capabilities, and ecosystem maturity. The guide helps architects and data leaders understand the philosophical and technical trade-offs, whether prioritising AI-native flexibility and open-source alignment with Databricks or streamlined governance and SQL-first simplicity with Snowflake. Practical recommendations, strategic considerations, and guidance by team persona equip readers to choose or combine these platforms to align with their data strategy and talent strengths.

Continue reading