Design and implement robust data pipelines and the underlying platform to support both batch and real-time data processing, ensuring optimal performance and scalability.
Act as a strategic partner to stakeholders and cross-functional teams, identifying opportunities to deliver high-quality data products and enhance business strategy.
Set technical standards and best practices for data engineering, data quality, and data governance.
Guide and mentor data engineers, fostering a culture of technical excellence, ownership, and continuous learning.
Lead the design and implementation of Recharge’s data architecture, policies, and processes.
Work closely with the Senior Product Manager to develop the roadmap, proactively proposing innovative data solutions that align with business objectives.
Own end-to-end data workflows, from ingestion through transformation to delivery, ensuring data accuracy, consistency, and accessibility.
Develop monitoring and alerting mechanisms to detect and resolve issues within the data infrastructure proactively.
Work with cloud services, particularly AWS, to build, manage, and optimise data lakes and warehouses.
Requirements
8+ years of experience in data engineering, building data pipelines, and managing ETL workflows at scale.
A solid understanding of concepts such as data management, data governance, metadata collection and management, and data as a product.
Excellent communication and collaboration skills, and the ability to translate complex technical subjects for stakeholders.
Experience with data observability, data quality management, and monitoring tools and best practices to ensure data reliability (we use Datadog).
Proven expertise in designing, building, and managing data lakes, data warehouses, and other data infrastructure, with or without SaaS applications.
A self-starter with strong problem-solving skills, continually seeking ways to optimise and innovate in data engineering processes.
Solid understanding of cloud environments, especially AWS, focusing on services like Athena, Redshift, SageMaker, S3, and AWS Step Functions.
Knowledge of or experience with tools such as Airflow, Airbyte, Fivetran, dbt, Sifflet, and Spark/PySpark.
Strong hands-on experience with data processing frameworks and languages like SQL and Python.
Experience with Infrastructure as Code tools like Terraform.
Experience with defining/maintaining CI/CD for data pipelines, including automated testing, environment provisioning and other best practices. We rely on GitHub Actions and Terraform.
Ability to operate responsibly in alignment with industry best practices and team conventions, and to drive initiatives to completion and deliver working solutions.
Tech Stack
Airflow
Amazon Redshift
AWS
Cloud
ETL
PySpark
Python
Spark
SQL
Terraform
Benefits
30 days of holiday, a great pension scheme, and one of the best relocation packages in Amsterdam
Flexible working hours and an office overlooking the Amstel
MacBook Pro or Windows laptop
Budget for noise-cancelling headphones, travel to and from the office, and self-learning, plus opportunities to work from home
Free healthy breakfast, lunch, and snacks by our in-house chef
Unlimited access to mental health support by certified psychologists via OpenUp
Free Dutch classes to help with daily life in the Netherlands