Interface to the data landscape: you integrate REST services and extract data from a wide range of sources.
Modern data engineering: you design and automate data pipelines, processing and transforming data with Apache Spark and Spark notebooks (PySpark/Spark SQL).
Data modeling: you prepare data for analysis and build semantic models.
Customer focus: you analyze customer requirements and develop appropriate solutions together with the team.
Teamwork & community: you support junior consultants and working students, contribute to community projects, and share your knowledge, e.g. in blog posts.
Requirements
A completed degree in computer science or business informatics, or equivalent expertise gained through other means.
Experience with ETL/ELT processes, data modeling, Apache Spark, and Microsoft Azure and/or AWS.
Fluent in German, both written and spoken.
Experience with Microsoft Fabric and Power BI.
Nice to have: skills in Data Factory/Data Flows, Synapse Analytics, SAP and/or Python, as well as a strong analytical mindset.
Tech Stack
Apache Spark
AWS
Azure
ETL
PySpark
Python
Benefits
Genuine team spirit instead of a cutthroat culture – we collaborate as equals: honestly and supportively.
Team events that connect – from the summer party to the Christmas party, from team activities to online gaming.
Flexible working hours that fit your life – structure your working day according to your own rhythm.
Open communication & flat hierarchies – short decision-making processes, honest feedback, and real opportunities to help shape the company.
Fitness network membership – stay active and support your health.
BusinessBike – sustainable mobility on two wheels, whether an e-bike or a conventional bike.