Design, build, and deploy robust data platforms, including data lakehouses and warehouses, and automate their operation using tools like Azure DevOps, Terraform, and CloudFormation.
Implement and optimize advanced ETL/ELT data pipelines and orchestrate complex data jobs using platforms like Databricks notebooks and dbt.
Design and maintain data models, potentially using semantic-layer/data virtualization tools like Cube.
Leverage managed and serverless cloud offerings to create performant and scalable cloud‑native application solutions and data pipelines.
Apply best practices for data security and ensure compliance with governance requirements.
Build high‑performance data pipelines and integrate with BI tools like Tableau, Looker, and Power BI.
Provide leadership in applying software development best practices, including CI/CD and managing Infrastructure as Code.
Requirements
5+ years of industry experience as a hands‑on practitioner in Data Engineering.
Advanced proficiency in Python and SQL (TypeScript/JavaScript/Java a plus).
Proven experience with ETL/ELT implementation and data modeling for data warehouses/lakehouses.
Understanding and experience with the Medallion Architecture (Bronze, Silver, and Gold layers).
Advanced DevOps and CI/CD experience, plus strong Linux system administration skills.
Expertise in 2–3 major cloud and data platforms (e.g., Azure, AWS, GCP, Snowflake, Databricks, Microsoft Fabric).
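For candidates less familiar with the term, the Medallion Architecture mentioned above organizes data into progressively refined layers: Bronze (raw, as ingested), Silver (cleaned and conformed), and Gold (business-level aggregates). A minimal, dependency-free Python sketch of the idea (the record fields and values are illustrative, not taken from this posting):

```python
# Bronze: raw records as ingested -- may contain duplicates and nulls.
bronze = [
    {"id": 1, "amount": "10.5", "country": "US"},
    {"id": 1, "amount": "10.5", "country": "US"},  # duplicate row
    {"id": 2, "amount": None, "country": "DE"},    # missing amount
    {"id": 3, "amount": "7.0", "country": "US"},
]

# Silver: cleaned and conformed -- deduplicate on id, drop rows with
# missing amounts, and cast string amounts to floats.
seen_ids = set()
silver = []
for row in bronze:
    if row["id"] in seen_ids or row["amount"] is None:
        continue
    seen_ids.add(row["id"])
    silver.append({**row, "amount": float(row["amount"])})

# Gold: business-level aggregate -- total amount per country.
gold = {}
for row in silver:
    gold[row["country"]] = gold.get(row["country"], 0.0) + row["amount"]
```

In a real lakehouse, each layer would typically be a set of Delta or warehouse tables maintained by orchestrated pipelines (e.g., Databricks jobs or dbt models) rather than in-memory lists, but the layering logic is the same.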
Tech Stack
AWS
Azure
Cloud
ETL
Google Cloud Platform
Java
JavaScript
Linux
Python
SQL
Tableau
Terraform
TypeScript
Benefits
Fully remote
Flexible schedule
Paid parental and bereavement leave
Globally recognized clients that help you build skills for an excellent resume