Skills: Azure · ETL · ELT · Python · Spark · SQL · Unity Catalog · AI/ML · Data Engineering · Analytics · Databricks · Version Control
Role Overview
Develop and maintain scalable ELT/ETL pipelines in Azure Data Factory and Databricks, supporting both existing and new data sources.
Build, refine and optimise transformation logic using Databricks notebooks, Python, SQL and Spark.
Model data across Bronze, Silver and Gold layers to ensure structure, consistency and clarity.
Collaborate with colleagues across architecture, finance, operations, and technology to understand data needs and translate them into well-designed models and solutions.
Ensure data quality through validation, testing and clear documentation.
Implement data governance controls and access policies with Unity Catalog.
Support business-critical system transformations by managing data migrations and ensuring data quality and continuity.
Champion best practices in data engineering, analytics, version control and development workflows.
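Governance controls in Unity Catalog, as mentioned above, are typically expressed as SQL privilege grants on securable objects. A short sketch, assuming hypothetical names (catalog `lakehouse`, schema `gold`, group `analysts`):

```sql
-- Hypothetical objects: adjust catalog, schema, and group names to your workspace.
GRANT USE CATALOG ON CATALOG lakehouse TO `analysts`;
GRANT USE SCHEMA  ON SCHEMA  lakehouse.gold TO `analysts`;
GRANT SELECT      ON SCHEMA  lakehouse.gold TO `analysts`;
```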
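The Bronze/Silver/Gold modelling mentioned above follows the medallion architecture: Bronze holds raw data as ingested, Silver holds validated and conformed data, and Gold holds aggregated, analytics-ready data. A minimal sketch of that layering in plain Python (hypothetical field names, no Spark dependency, purely illustrative):

```python
# Sketch of medallion-style layering over an invented order schema.
# Bronze: raw rows as landed; Silver: cleaned and conformed; Gold: aggregated.
from collections import defaultdict

# Bronze layer: raw records exactly as ingested, including bad rows.
bronze = [
    {"order_id": "1", "amount": "120.50", "region": "EMEA "},
    {"order_id": "2", "amount": "80.00", "region": "emea"},
    {"order_id": "3", "amount": None, "region": "APAC"},  # incomplete row
]

def to_silver(rows):
    """Validate and conform Bronze rows: drop nulls, normalise types and case."""
    out = []
    for r in rows:
        if r["amount"] is None:
            continue  # quality gate: reject incomplete records
        out.append({
            "order_id": int(r["order_id"]),
            "amount": float(r["amount"]),
            "region": r["region"].strip().upper(),
        })
    return out

def to_gold(rows):
    """Aggregate Silver rows into a reporting-ready total per region."""
    totals = defaultdict(float)
    for r in rows:
        totals[r["region"]] += r["amount"]
    return dict(totals)

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'EMEA': 200.5}
```

In Databricks the same idea would be expressed with Spark DataFrames persisted as Delta tables per layer, but the contract is identical: each layer only consumes the one beneath it.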
Requirements
Bachelor’s degree in Computer Science, Data Engineering, Information Systems or a related STEM field.
2+ years’ experience in analytics engineering, data engineering or data platform development.
Experience with Azure, Microsoft 365, Power Platform and Microsoft Graph API.
Exposure to AI/ML workflows and an understanding of the data requirements for model training and deployment.
Familiarity with the architecture, design or creative industries is a plus.