Use cutting-edge data technology to deliver world-class data products, combining streaming technologies, machine learning and automated data pipelines.
Work in self-organised, cross-functional data teams alongside machine learning engineers, BI engineers and product managers.
Drive continuous improvement to the software engineering and agile working practices of the team.
Contribute to the technical and architectural direction of the team.
Requirements
proficient knowledge of Scala and the JVM ecosystem.
familiarity with functional programming paradigms and a willingness to adopt other languages (not only JVM languages).
a solid background in software development in high-volume environments.
a pragmatic and open-minded approach to achieving outcomes in the simplest way possible.
experience with stream-processing technologies (e.g. Apache Kafka).
experience with AWS services, especially ElastiCache and ECS.
a passion for software quality, DevOps tooling (e.g. Terraform) and automation.
the ability to work well in lean, agile, cross-functional product teams using Scrum and Kanban practices.
good communication skills, with comfort presenting ideas and outputs to technical and non-technical stakeholders.