As a technical solution and implementation partner, you contribute to demanding customer projects, from architecture conception to the operational implementation of productive data platforms
Building, enhancing, and maintaining Azure- and Databricks-based data platforms is a central responsibility
You develop and optimize ETL/ELT pipelines with PySpark, Python and SQL, integrate heterogeneous source systems and design modern data lake and data warehouse architectures
Implementing and improving CI/CD processes in the data domain (e.g., Databricks Asset Bundles, Azure DevOps) is an integral part of your role
You are responsible for the operationalization of machine learning models, including deployment, monitoring and integration into existing data pipelines
Developing and optimizing database solutions (MS SQL and, ideally, Oracle), as well as close coordination with business units, data scientists and IT teams, round out your responsibilities
Requirements
Several years of practical experience building and evolving data platforms based on Azure and Databricks
Solid expertise in developing ETL/ELT processes with Python, PySpark and SQL
Experience in data warehouse, data lake or database development
Practical experience in the operationalization of machine learning models
Structured way of working, ownership mentality and confident demeanor in customer interactions
Business-fluent language skills (negotiation level)
Tech Stack
Azure
ETL
Oracle
PySpark
Python
SQL
Benefits
Work–life balance: 40-hour workweek, hybrid working model (60% remote), 30 vacation days per year (+ 2 company event days)
Attractive benefits: In addition to competitive compensation (€85–95k + bonus) you benefit from multi-day company events, a company bike (JobRad), an in-house gym and a public transport pass
Development: Individual technical and personal training opportunities as well as targeted development programs