Responsibilities
Design and implement end-to-end data pipelines to collect, process, and transform data from multiple sources.
Develop and maintain scalable data platforms to enable efficient data storage, retrieval and analysis.
Design, implement, and maintain analytics data models that support reporting, KPI tracking, and self-serve analytics use cases.
Develop and maintain transformation pipelines using tools such as dbt to turn raw data into clean, tested, and well-documented datasets.
Actively adopt and champion AI-assisted development tools and practices to accelerate engineering delivery and progressively automate routine data engineering and analytics tasks.
Develop and maintain self-serve analytics capabilities (e.g. curated datasets, semantic layers) that empower stakeholders to answer questions independently.
Collaborate with stakeholders to translate business and data requirements into robust technical designs.
Implement and maintain CI/CD pipelines to ensure consistent, testable and reliable data engineering and analytics deployments.
Implement data governance frameworks and ensure compliance with security, privacy, and regulatory standards.
Monitor data infrastructure performance and proactively troubleshoot issues to ensure reliability and scalability.
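To illustrate the kind of work the responsibilities above describe, here is a minimal sketch of a transform-and-load step with a basic data-quality rule, using only the Python standard library. The table name, columns, and quality rule are hypothetical examples, not part of the role definition:

```python
import sqlite3

def transform_orders(rows):
    """Clean raw order rows: drop records missing a key, normalise amounts to cents."""
    cleaned = []
    for row in rows:
        if row.get("order_id") is None:
            continue  # hypothetical quality rule: skip rows without a primary key
        cleaned.append({
            "order_id": row["order_id"],
            "amount_cents": int(round(float(row["amount"]) * 100)),
        })
    return cleaned

def load(conn, rows):
    """Idempotent load: re-running the pipeline upserts rather than duplicates."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (order_id TEXT PRIMARY KEY, amount_cents INTEGER)"
    )
    conn.executemany(
        "INSERT OR REPLACE INTO orders (order_id, amount_cents) "
        "VALUES (:order_id, :amount_cents)",
        rows,
    )

raw = [
    {"order_id": "a1", "amount": "19.99"},
    {"order_id": None, "amount": "5.00"},  # dropped by the quality rule
    {"order_id": "a2", "amount": "3.50"},
]
conn = sqlite3.connect(":memory:")
load(conn, transform_orders(raw))
print(conn.execute("SELECT COUNT(*), SUM(amount_cents) FROM orders").fetchone())  # → (2, 2349)
```

In a production pipeline the same shape recurs at larger scale: extract from source systems, apply documented cleaning rules, and load idempotently so reruns are safe.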
Requirements
Proven experience designing, building, and maintaining data platforms and pipelines in cloud environments (AWS preferred).
Proven experience with cloud data warehouses (AWS Redshift preferred).
Strong programming skills in Python and SQL.
Experience deploying dbt or similar frameworks for analytics engineering and data transformation.
Experience implementing and utilising AI-assisted development tools (e.g. Claude Code, GitHub Copilot, Cursor or similar LLM-powered coding assistants).
Experience with Git and version control best practices.
Strong collaboration skills to work effectively with cross-functional teams.
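As a concrete example of the Python and SQL skills and dbt-style testing listed above, the sketch below expresses two common data tests (not-null and uniqueness) as SQL queries that pass when they return zero rows, which is the convention dbt uses for data tests. The table, columns, and sample data are illustrative assumptions:

```python
import sqlite3

# Hypothetical table with deliberately dirty sample data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (customer_id TEXT, email TEXT)")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?)",
    [("c1", "a@example.com"), ("c2", None), ("c2", "b@example.com")],
)

def failing_rows(conn, sql):
    """A data test passes when its query returns zero rows."""
    return conn.execute(sql).fetchall()

not_null_test = "SELECT * FROM customers WHERE email IS NULL"
unique_test = """
    SELECT customer_id, COUNT(*) AS n
    FROM customers
    GROUP BY customer_id
    HAVING n > 1
"""

print(len(failing_rows(conn, not_null_test)))  # → 1 (one row has a NULL email)
print(len(failing_rows(conn, unique_test)))    # → 1 (one key appears twice)
```

Framing data-quality checks as zero-row queries keeps them declarative and easy to run in CI, which is why dbt and similar frameworks adopt this pattern.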