Sr. Azure Data Engineer (MS Fabric)

Mumbai, India

Job Title: Data Engineer – Unity Catalog Migration

Responsibilities:

- Plan and execute migration of data assets, tables, and access policies to Databricks Unity Catalog.

- Refactor and optimize data pipelines (ETL/ELT) in Spark, SQL, and Python to align with Unity Catalog structures.

- Update data access logic to leverage account-level governance and external locations, replacing legacy mount points.

- Validate data integrity, permissions, and lineage post-migration; coordinate with data governance teams for audit readiness.

- Troubleshoot migration blockers, automate repeatable tasks, and develop comprehensive documentation for all changes.
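One recurring task in the responsibilities above is replacing legacy DBFS mount-point paths with direct cloud-storage URIs backed by Unity Catalog external locations. A minimal sketch of a path-translation helper follows; the mount names and storage account are hypothetical, and in a real migration the mapping would be derived from the workspace's actual `dbutils.fs.mounts()` output:

```python
# Sketch: rewrite legacy /mnt/<name> paths as abfss:// URIs pointing at
# storage governed by Unity Catalog external locations.
# The mapping below is hypothetical; derive it from dbutils.fs.mounts()
# in a real workspace.
MOUNT_MAP = {
    "/mnt/raw": "abfss://raw@examplestorage.dfs.core.windows.net",
    "/mnt/curated": "abfss://curated@examplestorage.dfs.core.windows.net",
}

def to_external_path(path: str) -> str:
    """Rewrite a legacy mount-point path to its abfss:// equivalent."""
    for mount, uri in MOUNT_MAP.items():
        # Match the mount root exactly, or a path nested under it.
        if path == mount or path.startswith(mount + "/"):
            return uri + path[len(mount):]
    raise ValueError(f"No external location mapped for: {path}")

print(to_external_path("/mnt/raw/sales/2024/01.parquet"))
# abfss://raw@examplestorage.dfs.core.windows.net/sales/2024/01.parquet
```

A helper like this can be run over pipeline configs and notebook source to locate and rewrite hard-coded mount paths before cutover.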

Required Skills:

- 5+ years’ experience in data engineering, with demonstrable expertise in Databricks and Unity Catalog migrations.

- Strong knowledge of Python, SQL, and Spark-based data pipeline design.

- Experience with enterprise data governance, role-based security, and large-scale metadata management.

- Excellent interpersonal and communication skills for cross-functional collaboration.

Essential Tools and Utilities:

- Databricks UCX Toolkit (databrickslabs/ucx): command-line toolkit from Databricks Labs that automates assessment and migration steps, available on GitHub

- Databricks REST API and CLI: For workspace, job, asset, and access management

- Unity Catalog API

- Databricks Catalog Explorer: UI-based migration and table sync tool

- Azure CLI / PowerShell

- Databricks SQL Warehouses and Compute (standard and dedicated access modes)

- Source control (Git) and CI/CD tools for pipeline automation

- Logging, audit, and lineage tools connected to Databricks and Unity Catalog for data quality validation
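As one illustration of the REST API and Unity Catalog API entries above, catalogs can be listed with any HTTP client against the documented `/api/2.1/unity-catalog/catalogs` endpoint. The sketch below only constructs the request; the workspace host and token shown are placeholders:

```python
# Sketch: build a Unity Catalog REST API request to list catalogs.
# /api/2.1/unity-catalog/catalogs is the documented list-catalogs route;
# the host and token values used below are placeholders.
def list_catalogs_request(host: str, token: str) -> tuple[str, dict]:
    """Return the URL and auth headers for a GET on the catalogs endpoint."""
    url = f"https://{host}/api/2.1/unity-catalog/catalogs"
    headers = {"Authorization": f"Bearer {token}"}
    return url, headers

url, headers = list_catalogs_request(
    "adb-1234567890123456.7.azuredatabricks.net",  # placeholder workspace host
    "dapi-example-token",                          # placeholder PAT
)
print(url)
# https://adb-1234567890123456.7.azuredatabricks.net/api/2.1/unity-catalog/catalogs
```

A real call would pass these to an HTTP client such as `requests.get` and read the `catalogs` array from the JSON response.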

 
