Designing, implementing, and maintaining data pipelines and architectures
Working closely with clients to understand their data needs and realise tailored solutions
Navigating the world of Big Data and helping clients make their data business- or operations-ready
Accessing diverse data sources efficiently and reproducibly
Designing, testing, and monitoring distributed data processing pipelines
Collaborating with data scientists to create machine learning models
Estimating performance and costs for various use cases
Developing meaningful dashboards to visualize processed data
Requirements
Several years of experience in technical roles
Expertise in Python and SQL
Hands-on experience or strong interest in technologies such as Hadoop, Apache Spark, Hive, Airflow, RDBMS, NoSQL, DevOps, Kubernetes, and Java or .NET
Good understanding of the capabilities of the major public cloud environments (AWS, Microsoft Azure, Google Cloud Platform)
University degree (ETH, Uni, FH) in Computer Science, Mathematics, or a similar field
Fluency in both German and English
Tech Stack
Airflow
AWS
Azure
Google Cloud Platform
Hadoop
Java
Kubernetes
NoSQL
Python
RDBMS
Spark
SQL
.NET
Benefits
Work-life blend: we offer a safe and healthy workplace, with flexible working hours and the possibility to work from home
Profit share scheme: in addition to your annual salary, you may receive a profit share defined by the company’s success in the previous year
Global and diverse Zühlke community: witness how colleagues from all our 17 offices across the globe come together to create a unique, positive, and inclusive work culture, learning from one another at annual team camps and celebrating year-end parties and other local festivities
Committed to development: we are committed to the growth of our people, empowering you to build the skills you need to make a positive impact, both personally and for our clients, today and in the future