Senior Data Engineer – Integration Hub, Data Pipelines
Cuculus GmbH
Remote
Location: India
Employment type: Full Time
Posted: 2 weeks ago
Sponsorship: Not available
Key skills
Data Engineering
ETL/ELT
Apache Spark
Apache Kafka
Apache Airflow
PostgreSQL
SQL
Python
Java
Scala
Linux
Git (version control)
SaaS
CI/CD
About this role
Role Overview
Design, build, and maintain robust ETL/ELT data pipelines for batch and streaming workloads.
Implement data ingestion and transformation workflows using Apache Airflow, Apache NiFi, Apache Spark, and Kafka.
Integrate data from multiple sources including REST APIs, files, relational databases, message queues, and external SaaS platforms.
Optimize pipelines for performance, scalability, reliability, and cost efficiency.
Develop and operate a centralized data integration hub that supports multiple upstream and downstream systems.
Build reusable, modular integration components and frameworks.
Ensure high availability, fault tolerance, and observability of data workflows.
Design and manage data warehouses, data lakes, and operational data stores using PostgreSQL and related technologies.
Implement appropriate data modeling strategies for analytical and operational use cases.
Manage schema evolution, metadata, and versioning.
Implement data validation, monitoring, and reconciliation mechanisms to ensure data accuracy and completeness.
Enforce data security best practices, access controls, and compliance with internal governance policies.
Establish logging, alerting, and auditability across pipelines.
Automate data workflows, deployments, and operational processes to support scale and reliability.
Monitor pipelines proactively and troubleshoot production issues.
Improve CI/CD practices for data engineering workflows.
Work closely with data scientists, analysts, backend engineers, and business stakeholders to understand data requirements.
Translate business needs into technical data solutions.
Provide technical guidance and best practices across teams.
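The batch side of the pipeline work described above can be sketched in miniature. The example below is a hedged illustration, not this team's actual stack: the meter-reading schema is hypothetical, and sqlite3 stands in for PostgreSQL so the sketch stays self-contained (a real deployment would orchestrate steps like these via Airflow and Spark). It shows extract, validate/transform, load, and a simple row-count reconciliation check.

```python
import json
import sqlite3

# Hypothetical raw batch: one record ("n/a") has an unparseable reading.
RAW_RECORDS = """
[{"id": 1, "reading": "42.5", "meter": "A-1"},
 {"id": 2, "reading": "n/a",  "meter": "A-2"},
 {"id": 3, "reading": "17.0", "meter": "B-1"}]
"""

def extract(raw: str) -> list[dict]:
    # Extract: parse the raw source payload (file, API response, queue message).
    return json.loads(raw)

def transform(rows: list[dict]) -> list[tuple]:
    # Transform/validate: cast readings to float, quarantining bad rows.
    out = []
    for r in rows:
        try:
            out.append((r["id"], float(r["reading"]), r["meter"]))
        except ValueError:
            continue  # in production: route to a dead-letter table and alert
    return out

def load(rows: list[tuple], conn: sqlite3.Connection) -> int:
    # Load: write to the relational target and return the loaded row count.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS readings "
        "(id INTEGER PRIMARY KEY, value REAL, meter TEXT)"
    )
    conn.executemany("INSERT INTO readings VALUES (?, ?, ?)", rows)
    conn.commit()
    return conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0]

conn = sqlite3.connect(":memory:")
valid = transform(extract(RAW_RECORDS))
loaded = load(valid, conn)
# Reconciliation: rows loaded must equal rows that passed validation.
assert loaded == len(valid)
```

In an orchestrated setup, each function would become a task (e.g. an Airflow operator) and the reconciliation assertion a monitored data-quality check rather than an inline assert.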
Requirements
5+ years of hands-on experience as a Data Engineer or in a similar role.
Proven experience as an individual contributor on at least three end-to-end data engineering projects, from design to production.
Strong hands-on experience with:
Apache Airflow / Dagster
Apache NiFi
Apache Spark
Apache Kafka
PostgreSQL
Extensive experience integrating data from APIs, files, databases, and third-party systems.
Strong SQL skills and experience with data modeling.
Solid programming experience in Python and/or Java/Scala.
Experience with Linux environments and version control systems (Git).
Strong problem-solving, debugging, and performance-tuning skills.
Benefits
Health insurance
Professional development opportunities
Flexible working arrangements