Design, develop, and maintain scalable data pipelines and data architectures supporting analytics, experimentation, and machine-learning use cases.
Build and operate data workflows that leverage LLMs in production, such as classifying or analyzing user inputs.
Build and maintain the transformation layer (ELT), developing SQL models tailored to different use cases and audiences.
Implement best practices to ensure data quality, consistency, and security across data workflows.
Collaborate closely with product managers, software engineers, analysts, and data scientists to understand data needs and deliver effective technical solutions.
Monitor, troubleshoot, and improve data pipelines to ensure reliability and performance.
Contribute to the continuous improvement of the data stack, adopting new tools and practices as the business and data needs evolve.
Requirements
4+ years of experience as a Data Engineer, Analytics Engineer, or in a similar role.
Strong proficiency in Python and SQL.
Experience working with modern data warehouses (e.g., Redshift, BigQuery, Snowflake).
Hands-on experience with cloud services (AWS, GCP, or Azure).
Experience building, deploying, and maintaining data pipelines end-to-end.
Solid understanding of data modeling concepts such as dimensional modeling and slowly changing dimensions.
Strong communication skills and fluency in English; comfortable working in an English-speaking environment.
A product-minded approach to data, ensuring that data solutions address real business and user needs.