Design, implement, and maintain robust, scalable, and reliable data pipelines in an AWS environment;
Lead the secure and efficient migration of large volumes of data and workloads from GCP and on-premises to AWS, ensuring data integrity, security, and minimal downtime;
Build and orchestrate pipelines using Apache Airflow and dbt, supporting data transformation, validation, and data governance processes;
Work with services and technologies such as Amazon S3, AWS Lambda, Kubernetes, Snowflake, and orchestration tools;
Continuously monitor pipelines and implement optimizations to improve performance and reduce costs;
Collaborate closely with architects, engineers, and business stakeholders to ensure alignment with functional, non-functional, and compliance requirements.
Requirements
Experience with AWS and migrating data to the cloud;
Experience with GCP and/or on-premises environments;
Strong knowledge of ETL/ELT pipelines, orchestration, and automation;
Experience with Apache Airflow, dbt, and Python;
Familiarity with modern data architectures (Data Lake, Lakehouse);
Experience with analytical databases such as Snowflake;
Knowledge of data governance, security, and data engineering best practices.