Stott and May is working with a digital client that is hiring a Senior Data Engineer to build and scale large, distributed data pipelines. The role focuses on pipeline design, streaming architecture, and high-volume data processing in production, joining an established team that works on real-time and batch data systems at scale.
Responsibilities:
- Building large-scale batch and streaming data pipelines
- Working heavily with Kafka or similar event streaming platforms
- Developing distributed processing with Spark or Flink
- Optimizing throughput, reliability, and data quality
- Collaborating with data, analytics, and platform teams
Requirements:
- Strong experience as a Data Engineer on large distributed systems
- Kafka in production environments
- Spark or similar distributed processing frameworks
- Strong Python or Java, solid SQL
- Cloud experience preferred; open to engineers from GCP, Azure, or on-prem backgrounds
- Flink experience
- Terraform exposure
- Experience supporting downstream analytics or ML