Designing and evolving the architecture of our data warehouse and data platform
Transforming complex data models into scalable, production-ready data products
Building, maintaining, and improving data models for large and complex datasets
Designing and optimising ETL/ELT pipelines to support scalable analytics and data products
Establishing reliable, well-governed datasets that serve as a foundation for analytics and AI/ML use cases
Improving data reliability, quality, and observability across key datasets
Collaborating with stakeholders to define and deliver high-impact data products
Driving automation and improvements across a legacy ETL ecosystem (~1500 pipelines)
Contributing to engineering standards, documentation, and best practices within the data engineering team
Staying informed about modern data platform technologies and industry trends, and contributing ideas that help evolve our data stack and engineering practices
Requirements
Strong SQL expertise, particularly in complex analytical and warehouse queries
5+ years' experience in Data Engineering, Analytics Engineering, or Data Platform Engineering
Strong experience with data warehousing, dimensional modelling, and data architecture
Experience designing and delivering scalable data products
Experience with ETL/ELT tooling (dbt, SSIS, WhereScape, or similar)
Strong understanding of data pipeline architecture and distributed data systems
Proficiency in Python or a similar programming language
Understanding of how data platforms support machine learning and AI workflows
Strong grasp of software engineering principles and best practices
Experience contributing to or leading data warehouse architecture or redesign initiatives
Ability to collaborate effectively with technical and non-technical stakeholders