Validation & reconciliation for Databricks → Snowflake
Turn “it runs” into a measurable parity contract. We prove correctness for Delta MERGE and incremental systems with golden queries, KPI diffs, and integrity simulations—then gate cutover with rollback-ready criteria.
- Input: Databricks validation & reconciliation logic
- Output: Snowflake equivalent (validated)
Why this breaks
Databricks migrations fail late when teams validate only a one-time backfill and a few spot checks. Delta systems encode correctness in operational behavior: partition overwrite assumptions, MERGE/upsert semantics, retries, and late-arrival corrections. Snowflake can implement equivalent outcomes—but only if the correctness rules are made explicit and tested under stress.
Common drift drivers in Databricks/Delta → Snowflake:
- MERGE semantics drift: match keys, casts, and update predicates differ subtly
- Non-deterministic dedupe: window ordering missing tie-breakers; reruns choose different winners
- Late-arrival behavior: Delta reprocessing windows and Snowflake staged apply are not equivalent by default
- SCD drift: end-dating/current-flag logic breaks under backfills and late updates
- Operational failures: retry behavior changes; failures become silent data issues
Validation must treat the workload as an incremental system, not a static batch.
How conversion works
- Define the parity contract: what must match (facts/dims, KPIs, dashboards) and what tolerances apply (exact vs threshold).
- Build validation datasets: golden inputs, edge cohorts (ties, null-heavy segments), and representative windows (including boundary days).
- Run readiness + execution gates: schemas/types align, dependencies deployed, and jobs run reliably.
- Run layered parity gates: counts/profiles → KPI diffs → targeted row-level diffs where needed.
- Validate incremental integrity (mandatory for MERGE/upserts): idempotency reruns, late-arrival injections, and backfill simulations.
- Gate cutover: define pass/fail thresholds, canary strategy, rollback triggers, and post-cutover monitors.
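One way to make the parity contract concrete is to persist it as data, so every gate reads the same thresholds instead of hard-coding them. A minimal sketch in Snowflake SQL; the `VALIDATION.PARITY_CONTRACT` table and its columns are illustrative assumptions, not standard objects:

```sql
-- Sketch: the parity contract as a control table that gate jobs query.
CREATE TABLE IF NOT EXISTS VALIDATION.PARITY_CONTRACT (
  object_name      STRING,        -- e.g. 'MART.FACT_ORDERS'
  metric_name      STRING,        -- e.g. 'row_count', 'net_revenue'
  comparison_mode  STRING,        -- 'exact' or 'threshold'
  abs_tolerance    NUMBER(18,4),  -- allowed absolute diff (threshold mode)
  rel_tolerance    NUMBER(9,6),   -- allowed relative diff, e.g. 0.0005 = 5 bps
  signoff_owner    STRING         -- who arbitrates disputes for this metric
);
```

Storing tolerances and owners alongside each metric is what turns "teams argue over diffs" into a lookup.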
Supported constructs
Representative validation and reconciliation mechanisms we apply in Databricks → Snowflake migrations.
| Source | Target | Notes |
|---|---|---|
| Delta MERGE/upsert correctness | MERGE parity contract + rerun/late-data simulations | Proves behavior under retries, late arrivals, and backfills. |
| Golden dashboards/queries | Golden query harness + repeatable parameters | Codifies business sign-off into runnable tests. |
| Counts and profiles | Partition-level counts + null/min/max/distinct profiles | Cheap early drift detection before deep diffs. |
| KPI validation | Aggregate diffs by key dimensions + tolerance thresholds | Aligns validation with business meaning. |
| Row-level diffs | Targeted sampling diffs + edge cohort tests | Use deep diffs only where aggregates signal drift. |
| Cutover readiness | Canary gates + rollback criteria + monitors | Prevents “successful cutover” turning into KPI debates. |
How workload changes
| Topic | Databricks / Delta | Snowflake |
|---|---|---|
| Where correctness hides | Job structure + partition overwrite/reprocessing semantics | Explicit staged apply + idempotency contracts |
| Drift drivers | Implicit ordering and casting tolerated | Explicit casts and deterministic ordering required |
| Operational sign-off | Often based on “looks right” dashboard checks | Evidence-based gates + rollback triggers |
Examples
Illustrative parity and integrity checks in Snowflake. Replace schemas, keys, and KPI definitions to match your migration.
```sql
-- Row counts by window (Snowflake)
SELECT
  TO_DATE(updated_at) AS d,
  COUNT(*)            AS row_count
FROM MART.FACT_ORDERS
WHERE TO_DATE(updated_at) BETWEEN :start_d AND :end_d
GROUP BY 1
ORDER BY 1;
```

Common pitfalls
- Validating only the backfill: parity on a static snapshot doesn’t prove correctness under reruns/late data.
- Spot checks instead of gates: a few sampled rows miss drift in ties and edge windows.
- No tolerance model: teams argue over diffs because thresholds weren’t defined upfront.
- Unstable ordering: ROW_NUMBER/RANK without complete ORDER BY; winners change under retries.
- MERGE scope blind: full-target scans hide problems and spike credits; apply windows must be bounded.
- Ignoring operational signals: no monitors for lag, retries, credit burn, and failure patterns after cutover.
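The "MERGE scope blind" pitfall is usually fixed by bounding the staged side of the MERGE explicitly. A hedged sketch in Snowflake SQL; the staging table, key and column names, and the `:apply_start`/`:apply_end` parameters are assumptions to adapt to your pipeline:

```sql
-- Bound the apply window so each run touches a known, finite slice of data.
MERGE INTO MART.FACT_ORDERS AS t
USING (
    -- Only the staged batch for this window, never the full history.
    SELECT *
    FROM STG.FACT_ORDERS_BATCH
    WHERE TO_DATE(updated_at) BETWEEN :apply_start AND :apply_end
) AS s
  ON t.order_id = s.order_id
WHEN MATCHED AND s.updated_at > t.updated_at THEN UPDATE SET
  t.status     = s.status,
  t.amount     = s.amount,
  t.updated_at = s.updated_at
WHEN NOT MATCHED THEN INSERT (order_id, status, amount, updated_at)
  VALUES (s.order_id, s.status, s.amount, s.updated_at);
```

The `s.updated_at > t.updated_at` guard also makes rerunning the same micro-batch a no-op, which is exactly what the Gate 4 idempotency check exercises.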
Validation approach
Gate set (layered)
Gate 0 — Readiness
- Schemas, permissions, and warehouses ready
- Dependent assets deployed (UDFs/procedures, reference data, control tables)
Gate 1 — Execution
- Pipelines run reliably under representative volume and concurrency
- Deterministic ordering + explicit casts enforced
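Deterministic ordering in practice means every dedupe window carries a complete tie-breaker. A sketch, assuming a `source_seq` column that monotonically orders duplicates within a key (any ingestion sequence or file offset works; the name is illustrative):

```sql
-- Pick exactly one winner per key. Without source_seq, rows that tie on
-- updated_at can resolve differently on each rerun.
SELECT *
FROM STG.FACT_ORDERS_BATCH
QUALIFY ROW_NUMBER() OVER (
  PARTITION BY order_id
  ORDER BY updated_at DESC,
           source_seq DESC   -- assumed total order among duplicates
) = 1;
```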
Gate 2 — Structural parity
- Row counts by partitions/windows
- Null/min/max/distinct profiles for key columns
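A single profile query, run identically on both sides and diffed, covers Gate 2 cheaply. Column and table names follow the earlier row-count example and are assumptions:

```sql
-- Per-day column profiles: run on source export and on Snowflake, then diff.
SELECT
  TO_DATE(updated_at)      AS d,
  COUNT(*)                 AS row_count,
  COUNT_IF(amount IS NULL) AS amount_nulls,
  MIN(amount)              AS amount_min,
  MAX(amount)              AS amount_max,
  COUNT(DISTINCT order_id) AS distinct_orders
FROM MART.FACT_ORDERS
WHERE TO_DATE(updated_at) BETWEEN :start_d AND :end_d
GROUP BY 1
ORDER BY 1;
```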
Gate 3 — KPI parity
- KPI aggregates by key dimensions
- Top-N and ranking parity validated on tie/edge cohorts
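A KPI parity gate can be written so that any returned row is a failure. A sketch, assuming the Databricks-side aggregates have been exported and loaded as `VALIDATION.DBX_FACT_ORDERS_AGG` (name and 5-bps tolerance are illustrative; real thresholds come from the parity contract):

```sql
-- KPI diff by dimension; an empty result means the gate passes.
WITH src AS (
  SELECT region, net_revenue FROM VALIDATION.DBX_FACT_ORDERS_AGG
),
tgt AS (
  SELECT region, SUM(amount) AS net_revenue
  FROM MART.FACT_ORDERS
  GROUP BY region
),
diff AS (
  SELECT
    COALESCE(s.region, t.region) AS region,
    s.net_revenue                AS dbx_value,
    t.net_revenue                AS sf_value,
    ABS(COALESCE(s.net_revenue, 0) - COALESCE(t.net_revenue, 0))
      / NULLIF(ABS(s.net_revenue), 0) AS rel_diff
  FROM src s
  FULL OUTER JOIN tgt t ON s.region = t.region
)
SELECT *
FROM diff
WHERE rel_diff IS NULL      -- dimension present on only one side
   OR rel_diff > 0.0005;    -- 5 bps tolerance
```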
Gate 4 — Incremental integrity (mandatory)
- Idempotency: rerun same micro-batch → no net change
- Late-arrival: inject late updates → only expected rows change
- Backfill safety: replay historical windows → stable SCD and dedupe
- Dedupe stability: duplicates eliminated consistently under retries
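The idempotency rerun can be checked with an order-independent fingerprint taken before and after replaying the same micro-batch. A sketch using Snowflake's `HASH_AGG`; table names are illustrative:

```sql
-- Snapshot a fingerprint of the target before the rerun.
CREATE OR REPLACE TEMP TABLE pre_rerun AS
SELECT COUNT(*) AS row_count, HASH_AGG(*) AS fingerprint
FROM MART.FACT_ORDERS;

-- ...re-execute the same micro-batch MERGE here...

-- Rerunning an already-applied batch must produce no net change.
SELECT IFF(pre.row_count = post.row_count
           AND pre.fingerprint = post.fingerprint,
           'PASS', 'FAIL') AS idempotency_gate
FROM pre_rerun pre,
     (SELECT COUNT(*) AS row_count, HASH_AGG(*) AS fingerprint
      FROM MART.FACT_ORDERS) post;
```

The same fingerprint pattern extends to late-arrival injections: only the expected window's fingerprint should change.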
Gate 5 — Cutover & monitoring
- Canary criteria + rollback triggers
- Post-cutover monitors: latency, credit burn, failures, and KPI sentinels
Migration steps
- 01
Define the parity contract
Decide what must match (tables, dashboards, KPIs), at what granularity, and with what tolerance thresholds. Identify golden outputs and sign-off owners.
- 02
Create validation datasets and edge cohorts
Select representative windows and cohorts that trigger edge behavior (ties, null-heavy segments, boundary days, late updates).
- 03
Implement layered gates
Start with cheap checks (counts/profiles), then KPI diffs, then deep diffs only where needed. Codify gates into runnable jobs so validation is repeatable.
- 04
Validate incremental integrity
Run idempotency reruns, late-arrival injections, and backfill simulations. These are the scenarios that usually break after cutover if not tested.
- 05
Gate cutover and monitor
Establish canary/rollback criteria and post-cutover monitors for KPIs and pipeline health (latency, credits, failures, queueing).
We define your parity contract, build the golden-query set, and implement layered reconciliation gates—including idempotency reruns and late-data simulations—so drift is caught before production cutover.
Get a validation plan, runnable gates, and sign-off artifacts (diff reports, thresholds, monitors) so Databricks→Snowflake cutover is controlled and dispute-proof.