Workload

Stored procedures, macros & UDFs for Databricks → BigQuery

Re-home reusable logic—from Spark SQL UDFs and notebook “macro” utilities to BigQuery UDFs and stored procedures—so behavior stays stable under retries, backfills, and changing upstream schemas.

At a glance
Input
Databricks stored procedures, UDFs, and macro/utility logic
Output
BigQuery equivalent (validated)
Common pitfalls
  • Hidden dependencies: notebook macros rely on implicit session configs, temp views, or catalogs.
  • Mixed-type branches: CASE/IF returns different types; BigQuery needs explicit casts.
  • NULL semantics drift: null-safe equality and string functions behave differently unless explicit.
Context

Why this breaks

In Databricks estates, “procedural” logic is often distributed: Spark SQL UDFs, notebook helper functions, dbt macros, and job utilities embedded across pipelines. During migration, teams focus on SQL translation and MERGE behavior, but these reusable assets quietly change outputs because typing, NULL semantics, and time handling differ in BigQuery.

Common symptoms after migration:

  • UDF outputs drift due to type coercion and NULL handling differences
  • Notebook macros that relied on Spark functions or implicit session settings stop matching
  • Error handling changes (fail-fast vs continue), breaking orchestration expectations
  • Dynamic SQL and identifier resolution behaves differently in BigQuery
  • Side effects (audit/control tables) aren’t modeled, so retries/backfills double-apply or skip

A successful migration consolidates scattered utilities into explicit BigQuery routines with a behavior contract and a replayable harness that proves parity.

Approach

How conversion works

  1. Inventory & classify reusable logic: Spark SQL UDFs, notebook utilities/macros, dbt macros, and call sites across pipelines and BI queries.
  2. Extract the behavior contract: inputs/outputs, typing, null rules, side effects, error semantics, and performance constraints.
  3. Choose BigQuery target form per asset:
    • SQL UDF for pure expressions
    • JavaScript UDF for complex logic (string/regex/object handling)
    • Stored procedure (SQL scripting) for control flow and multi-statement logic
    • Set-based refactor where loop/cursor patterns exist
  4. Translate and normalize: casts, null-safe comparisons, timezone intent, and deterministic ordering where logic depends on ranking/dedup.
  5. Validate with a harness: golden inputs/outputs, branch coverage, failure-mode tests, and side-effect assertions—then integrate into representative pipelines.
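As a sketch of step 4, Spark's null-safe equality operator `<=>` and its tolerant implicit casts both need to be made explicit in BigQuery. Table and column names below are placeholders:

```sql
-- Databricks (Spark SQL): null-safe join and implicit cast, both tolerated
--   SELECT * FROM a JOIN b ON a.key <=> b.key WHERE a.amount > '100';

-- BigQuery: state the null semantics and the cast explicitly
SELECT *
FROM `proj.ds.a` AS a
JOIN `proj.ds.b` AS b
  ON a.key IS NOT DISTINCT FROM b.key      -- null-safe comparison, matches Spark's <=>
WHERE a.amount > CAST('100' AS NUMERIC);   -- explicit cast instead of implicit coercion
```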

Supported constructs

Representative Databricks procedural/macro constructs we commonly migrate to BigQuery routines (exact coverage depends on your estate).

  • Spark SQL UDFs → BigQuery SQL UDFs. Pure expressions mapped with explicit casts and NULL behavior.
  • Notebook macro utilities → BigQuery procedures + views/UDFs. Hidden dependencies made explicit; interfaces stabilized.
  • dbt macros (logic reuse) → BigQuery macros/UDFs/views (depending on use). Refactor into reusable, testable assets with stable contracts.
  • Dynamic SQL generation → EXECUTE IMMEDIATE with safe parameterization. Normalize identifier resolution; reduce drift and injection risk.
  • Control-table driven rerun behavior → Idempotency markers + applied-window tracking. Retries/backfills become safe and auditable.
  • Timestamp/timezone logic → Explicit timezone conversions and intent. Prevents boundary-day drift in reporting and SCD logic.
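For the dynamic SQL row above, a minimal sketch of safe parameterization with EXECUTE IMMEDIATE (identifiers are placeholders). Note that USING binds values only, never table or column names, so identifiers must still be validated against an allow-list before being concatenated:

```sql
DECLARE target_date DATE DEFAULT DATE '2024-01-01';
DECLARE row_count INT64;

-- Bind the value via USING rather than string concatenation
EXECUTE IMMEDIATE
  "SELECT COUNT(*) FROM `proj.ds.orders` WHERE order_date = ?"
  INTO row_count
  USING target_date;
```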

How workload changes

  • Where logic lives. Databricks: notebooks, jobs, Spark UDFs, and scattered utility code. BigQuery: centralized routines (UDFs/procedures) with explicit contracts. Migration consolidates and stabilizes reusable logic.
  • Typing and coercion. Databricks: implicit casts are often tolerated. BigQuery: explicit casts are recommended for stable outputs. Validation focuses on mixed-type branches and join keys.
  • Operational behavior. Databricks: reruns/retries are often encoded in job structure. BigQuery: idempotency and side effects must be explicit. The harness proves behavior under retries/backfills.
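The typing point is where drift most often appears. A mixed-type CASE that Spark silently coerces can fail or change type in BigQuery; making every branch the same explicit type removes the ambiguity (table and columns below are illustrative):

```sql
-- Spark may coerce mixed branch types; BigQuery requires a common supertype.
-- State the intended output type in every branch:
SELECT
  CASE
    WHEN status = 'refund' THEN CAST(-amount AS NUMERIC)
    WHEN status = 'void'   THEN CAST(0 AS NUMERIC)
    ELSE CAST(amount AS NUMERIC)
  END AS signed_amount
FROM `proj.ds.payments`;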

Examples

Illustrative patterns for moving Databricks UDFs and macro utilities into BigQuery routines. Adjust datasets, types, and identifiers to match your environment.

-- BigQuery SQL UDF example
CREATE OR REPLACE FUNCTION `proj.util.safe_div`(n NUMERIC, d NUMERIC) AS (
  IF(d IS NULL OR d = 0, NULL, n / d)
);
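For logic too complex for a SQL UDF (the string/regex/object cases mentioned above), the target form is a BigQuery JavaScript UDF. One advantage is that the JS body can be unit-tested outside BigQuery before being wrapped in `CREATE FUNCTION ... LANGUAGE js`. A sketch, with a hypothetical `extractDomain` helper:

```javascript
// Candidate body for a BigQuery JavaScript UDF (illustrative name).
// BigQuery passes a NULL STRING argument into JS as null, so handle it first.
function extractDomain(email) {
  if (email === null || email === undefined) return null;
  const m = /@([^@\s]+)$/.exec(email.trim());
  return m ? m[1].toLowerCase() : null; // null for malformed input, not an error
}

console.log(extractDomain("Ada.Lovelace@Example.COM")); // example.com
console.log(extractDomain("not-an-email"));             // null
```

Returning NULL for malformed input (rather than throwing) keeps the routine's error semantics explicit and matches the fail-soft contract of the `safe_div` example above.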
Avoid

Common pitfalls

  • Hidden dependencies: notebook macros rely on implicit session configs, temp views, or catalogs.
  • Mixed-type branches: CASE/IF returns different types; BigQuery needs explicit casts.
  • NULL semantics drift: null-safe equality and string functions behave differently unless explicit.
  • Dynamic SQL risks: string concatenation without bindings/escaping causes drift and security risk.
  • Side effects ignored: control/audit writes not recreated; reruns become unsafe.
  • Row-by-row patterns: loops in notebooks are costly in BigQuery; refactor to set-based SQL.
  • No harness: without replayable tests, parity becomes a debate at cutover.
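On the row-by-row pitfall: a notebook loop that updates one customer at a time typically collapses into a single set-based MERGE in BigQuery. A sketch with placeholder tables and columns:

```sql
-- Per-customer UPDATE loop refactored into one set-based statement
MERGE `proj.ds.customer_totals` AS t
USING (
  SELECT customer_id, SUM(amount) AS total
  FROM `proj.ds.orders`
  GROUP BY customer_id
) AS s
ON t.customer_id = s.customer_id
WHEN MATCHED THEN UPDATE SET total = s.total
WHEN NOT MATCHED THEN INSERT (customer_id, total) VALUES (s.customer_id, s.total);
```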
Proof

Validation approach

  • Compile + interface checks: each routine deploys; argument/return types match the contract.
  • Golden tests: curated input sets validate outputs, including NULL-heavy and boundary values.
  • Branch + failure-mode coverage: exercise key branches and expected failures.
  • Side-effect verification: assert expected writes to control/log/audit tables and idempotency under retries.
  • Integration replay: run routines inside representative pipelines and compare downstream KPIs/aggregates.
  • Performance gate: confirm no hidden row-by-row scans; refactors validated with bytes scanned/runtime baselines.
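A golden test can be expressed directly in SQL: evaluate the routine against a curated input table and return only the divergent rows, so an empty result means parity. Using the `safe_div` example above with an assumed QA table:

```sql
-- Returns zero rows when every golden case matches; any row is a failure.
-- `proj.qa.golden_safe_div` is an assumed table of curated inputs/expectations.
SELECT
  g.n,
  g.d,
  g.expected,
  `proj.util.safe_div`(g.n, g.d) AS actual
FROM `proj.qa.golden_safe_div` AS g
WHERE `proj.util.safe_div`(g.n, g.d) IS DISTINCT FROM g.expected;
```

IS DISTINCT FROM (rather than `!=`) keeps NULL-heavy cases honest: a NULL actual compared to a NULL expected counts as a match, and a NULL on only one side counts as a failure.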
Execution

Migration steps

A sequence that keeps behavior explicit, testable, and safe to cut over.
  1. Inventory reusable logic and call sites: Collect Spark SQL UDFs, notebook utilities/macros, and dbt macros, plus call sites across pipelines and BI queries. Identify dependencies and side effects (control/audit tables).
  2. Define the behavior contract: Specify inputs/outputs, typing, NULL rules, error semantics, side effects, and performance expectations. Decide the target form (SQL UDF, JS UDF, procedure, or refactor).
  3. Convert with safety patterns: Make casts explicit, normalize timezone intent, implement null-safe comparisons, and migrate dynamic SQL to EXECUTE IMMEDIATE with bindings and explicit identifier rules.
  4. Build a replayable harness: Create golden input sets, boundary cases, and expected failures. Validate outputs and side effects deterministically so parity isn’t debated at cutover.
  5. Integrate and cut over behind gates: Run routines in representative pipelines, compare downstream KPIs, validate retries/backfills, and cut over with rollback-ready criteria.
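The retry/backfill validation in the last step assumes the routines themselves are idempotent. One common pattern is an applied-window marker in a control table, sketched here with assumed names (`proj.ops.applied_windows`, `load_daily_orders`):

```sql
-- Idempotent-apply sketch: a rerun for an already-applied window is a no-op.
-- The check and the marker insert are not atomic; serialize runs per window
-- in the orchestrator if concurrent retries are possible.
DECLARE run_date DATE DEFAULT DATE '2024-01-01';  -- normally supplied by the orchestrator

IF NOT EXISTS (
  SELECT 1 FROM `proj.ops.applied_windows`
  WHERE routine = 'load_daily_orders' AND window_date = run_date
) THEN
  CALL `proj.etl.load_daily_orders`(run_date);
  INSERT INTO `proj.ops.applied_windows` (routine, window_date, applied_at)
  VALUES ('load_daily_orders', run_date, CURRENT_TIMESTAMP());
END IF;
```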

Workload Assessment
Migrate reusable logic with a test harness

We inventory your UDFs and macros, migrate a representative subset into BigQuery routines, and deliver a harness that proves parity—including side effects and failure behavior.

Migration Acceleration
Cut over routines with proof-backed sign-off

Get a conversion plan, review markers for ambiguous intent, and validation artifacts so procedural logic cutover is gated by evidence and rollback criteria.

FAQ

Frequently asked questions

What are “macros” in a Databricks context?
They’re often notebook utilities, shared SQL snippets, or dbt macro logic that behaves like reusable functions. We migrate them into BigQuery-native assets (UDFs, views, procedures) with explicit contracts.
Do we have to move everything into stored procedures?
No. Many utilities belong as SQL UDFs or views, and some procedural logic is best refactored into set-based SQL. We choose the target form per asset to minimize risk and cost.
How do you prove parity for UDFs and procedures?
We build a replayable harness with golden inputs/outputs, branch and failure-mode coverage, and side-effect assertions. Integration replay validates downstream KPIs before cutover.
What’s the biggest risk area?
Implicit typing and NULL behavior differences, plus hidden side effects used for restartability. We make these explicit and validate under retries/backfills so behavior remains stable.