Kraken is a mission-focused company rooted in crypto values, aiming to accelerate the global adoption of crypto. The Data Platform Engineering Manager will lead a team responsible for building real-time streaming infrastructure and managing the architecture of high-volume data systems to support various teams within the organization.
Responsibilities:
- Lead and grow a team of senior data platform engineers building Kraken's real-time streaming infrastructure
- Own the architecture and roadmap for high-volume, low-latency data systems, with a focus on the modern data stack: Spark, Kafka, Iceberg, RisingWave, and Apache Flink
- Design and operate scalable data architectures that serve trading, risk, compliance, analytics, and many product teams
- Drive adoption of AI automation and intelligent workflows — automating data quality checks, pipeline orchestration, anomaly detection, and self-healing infrastructure
- Partner with ML/AI, analytics, and product engineering teams to deliver platform capabilities that accelerate their work
- Evolve Kraken's data-lake and warehouse architecture to support both batch and streaming workloads seamlessly
- Set technical direction for the team — balancing reliability, velocity, and cost efficiency at scale
- Hire, mentor, and retain top-tier platform engineers; build a culture of ownership and technical excellence
Requirements:
- 8+ years in data engineering, platform engineering, or distributed systems — with at least 3 years managing engineering teams
- Experience building data lakes in AWS (e.g. Spark, Athena, Iceberg, Parquet, Presto), including data modeling, data-quality best practices, and self-service tooling
- Strong expertise in building and operating real-time data pipelines at scale, including Kafka, Spark Streaming, Debezium, and CDC patterns
- Proven ability to manage competing priorities across multiple stakeholder groups — aligning platform investments with the needs of product, finance, compliance, analytics, and other teams
- Strong communicator — able to explain risks, trade-offs, and roadmap decisions to both senior technical audiences and non-specialist stakeholders
- Experience designing or adopting AI/ML-powered automation in data workflows — pipeline orchestration, intelligent monitoring, automated remediation, or LLM-integrated tooling
- Proficiency in Python, Scala, or Java in a production data platform context
- Solid understanding of cloud-native data infrastructure (AWS preferred — Glue, Athena, S3, EMR, Lambda, or equivalents)
- Track record of managing, recruiting, and developing high-performing remote engineering teams
- Ability to translate long-term platform vision into executable quarterly roadmaps
- Servant-leadership style — you coach, unblock, and grow your engineers
- Eagerness to adopt AI tooling that multiplies the team's efficiency and overall output
Nice to have:
- Experience with RisingWave and/or ClickHouse specifically, either in production or in serious evaluation
- Familiarity with LLM-based agents or AI workflow frameworks (e.g. LangChain, LangGraph, custom orchestration)
- Background in cryptocurrency, trading systems, or high-throughput financial data
- Experience building self-service data platform tooling for internal engineering consumers
- Contributions to open-source streaming or data infrastructure projects