Build and maintain reliable data pipelines to ingest and transform data from enterprise systems, including finance, CRM, HR, and collaboration platforms
Develop and manage data transformations to support business intelligence and analytical use cases
Contribute to the design and implementation of dimensional and analytics-friendly data models
Work with senior engineers to implement data engineering best practices, standards, and governance controls
Support data quality, validation, and monitoring processes to ensure trusted reporting and insights
Collaborate with analysts, engineers, and business stakeholders to understand reporting requirements and translate them into technical solutions
Contribute to documentation, data lineage, and technical knowledge sharing across the team
Requirements
Experience delivering data engineering solutions in a production environment
Practical experience with:
SQL for data transformation and modelling
Python for data processing or orchestration
Modern cloud data platforms (e.g. Microsoft Fabric or equivalent)
Experience ingesting data from relational databases and/or SaaS platforms via their APIs
Understanding of core data engineering concepts such as:
Data modelling (e.g. dimensional models)
Data quality and validation
Basic data governance principles
Strong analytical and problem-solving skills, with attention to detail
Good communication skills, with the ability to work effectively with technical and non-technical stakeholders
Ability to manage tasks independently while contributing effectively within a team
Exposure to PySpark or notebook-based data development (desirable)
Familiarity with enterprise systems such as ERP, CRM, or HR platforms (desirable)
Experience working in a governed, multi-environment (DEV / TEST / PROD) data platform (desirable)
Awareness of CI/CD or DevOps practices for data engineering (desirable)
An interest in building scalable, maintainable data platforms in a fast-growing organisation (desirable)