About this role
You will focus on integrating and ingesting new data into the Data Lake and improving existing pipelines.
You will gather requirements from client areas, define and negotiate priorities and deadlines, and manage the activities of your projects.
As a Senior engineer, you will also review the area's processes, contribute to evolving the frameworks we use, and provide technical support to other engineers.
Requirements
Education: Bachelor's degree in Computer Science, Computer Engineering, Information Systems, Software Engineering, Data Science, or a similar field.
A plus: postgraduate degree in a data-related area (Data Engineering, Big Data, Data Science, and/or AI).
Cloud experience (GCP and/or AWS).
Proficiency in Python.
Proficiency in SQL and MQL (MongoDB Query Language), primarily for Oracle, MongoDB, BigQuery, Athena, and Redshift.
Experience integrating data from multiple sources: Kafka, APIs, transactional databases (structured and semi-structured data), and files (CSV, JSON).
Experience with Airflow for workflow orchestration.
Experience using Git for version control.
Hands-on experience using AI agents to improve productivity in team activities.
Desirable: knowledge of Kubernetes; experience with dbt and data modeling.
Tech Stack
Airflow
Amazon Redshift
AWS
BigQuery
Google Cloud Platform
Kafka
Kubernetes
MongoDB
Oracle
Python
SQL
Benefits
Profit sharing (PPR)
Healthcare plan
Dental care
On-site cafeteria
Life insurance
Transportation allowance
Pharmacy benefit (Univers)
Corporate agreements with partner companies (New Value)