Nerdery is a digital product consultancy that partners with clients to build innovative technology solutions. The Senior Data Engineer designs and implements data architecture and analytics platforms that drive business insights, with a focus on building scalable data platforms on AWS and modernizing legacy infrastructure.
Responsibilities:
- Design and implement end-to-end data pipelines and ETL/ELT processes
- Architect scalable data platforms using patterns such as medallion/multi-hop, data mesh, or lakehouse
- Develop data-processing logic using Python (Pandas, PySpark, Polars) and advanced SQL
- Automate data workflows using orchestration tools like Airflow/MWAA and Step Functions (see the sketch after this list)
- Implement Infrastructure-as-Code (Terraform/CloudFormation) and CI/CD for data workloads
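To make the orchestration responsibility concrete, here is a minimal sketch using Airflow's TaskFlow API (assuming Airflow 2.4+); the DAG name, schedule, and task logic are hypothetical placeholders for illustration, not a description of any actual client pipeline:

```python
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def daily_sales_pipeline():
    @task
    def extract() -> list[dict]:
        # Placeholder: pull raw records from a source system.
        return [{"order_id": 1, "amount": 42.0}, {"order_id": 2, "amount": -5.0}]

    @task
    def transform(records: list[dict]) -> list[dict]:
        # Placeholder: basic cleaning before load (drop invalid amounts).
        return [r for r in records if r["amount"] > 0]

    @task
    def load(records: list[dict]) -> None:
        # Placeholder: write to the warehouse (e.g., Snowflake via a provider hook).
        print(f"Loaded {len(records)} records")

    load(transform(extract()))


daily_sales_pipeline()
```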
Requirements:
- Bachelor's degree in Computer Science, Data Engineering, or a related field (or equivalent practical experience)
- 6+ years of professional experience in data engineering or a closely related discipline
- In-depth knowledge of Snowflake, including warehouse design, role-based access control, performance tuning, and cost optimization
- Experience migrating data pipelines and infrastructure to Snowflake from other platforms (legacy warehouses, on-premises systems, or hybrid environments)
- Advanced proficiency with dbt for modular, tested, and version-controlled data transformations
- Hands-on experience with Dagster for pipeline orchestration, asset management, and data observability (see the asset sketch at the end of this posting)
- Proven experience designing and implementing data pipelines, data storage solutions, and analytics platforms on modern cloud data stacks
- Advanced proficiency in Python with hands-on experience in data-processing libraries such as Pandas, Polars, PySpark, or similar
- Strong SQL skills across both analytical and transactional workloads
- Experience with relational and/or columnar databases such as PostgreSQL, SQL Server, or Snowflake
- Deep understanding of data modeling, ETL/ELT processes, and data warehousing principles
- Experience implementing data architecture patterns (e.g., medallion/multi-hop, data mesh, lakehouse) to support scalable, well-governed data platforms
- Strong version-control practices (Git) with experience building CI/CD workflows for data workloads (e.g., GitHub Actions)
- Proven ability to translate business and technical requirements into production-grade, maintainable data pipelines
- Excellent problem-solving and analytical skills
- Ability to communicate clearly with both technical and non-technical stakeholders
- Proactive collaborator who mentors colleagues and raises the technical bar across the team
- Familiarity with Infrastructure-as-Code tools such as Pulumi or Terraform
- Familiarity with event-driven architectures and integrating orchestration platforms with cloud services
- Experience building and consuming RESTful APIs
- Exposure to containerization (Docker) is a plus
- Familiarity with data quality and observability tooling (e.g., dbt tests, Great Expectations, Monte Carlo) is desirable
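For the Dagster requirement above, a minimal sketch of software-defined assets (assuming dagster 1.x and pandas; the asset names and filtering logic are hypothetical):

```python
import pandas as pd
from dagster import Definitions, asset


@asset
def raw_orders() -> pd.DataFrame:
    # Placeholder: ingest raw order data from a source system.
    return pd.DataFrame({"order_id": [1, 2], "amount": [42.0, -5.0]})


@asset
def clean_orders(raw_orders: pd.DataFrame) -> pd.DataFrame:
    # Dagster wires this asset downstream of raw_orders via the parameter name.
    return raw_orders[raw_orders["amount"] > 0]


defs = Definitions(assets=[raw_orders, clean_orders])
```

Dagster infers the clean_orders-depends-on-raw_orders relationship from the parameter name, which is the asset-management style this role calls for.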