Professionals specializing in ETL tools for handling large volumes of data.
They are also responsible for maintaining data pipelines and ensuring secure access to information.
Responsibilities may also include creating and exposing metrics (dashboards) from the collected data, ensuring the quality and integrity of the resulting insights.
Requirements
AWS & Data Engineering
Building data ingestion and transformation pipelines using AWS services (Glue, Spark)
Orchestrating jobs and dependencies; handling failures, retries, and observability
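As an illustration of the failure-handling and retry responsibilities listed above, here is a minimal sketch in plain Python; the function names and backoff parameters are hypothetical, not part of this posting, and a real pipeline would typically delegate this to the orchestrator (e.g. Glue job retries or a workflow engine):

```python
import time

def run_with_retries(job, max_attempts=3, base_delay=1.0):
    """Run a pipeline step, retrying with exponential backoff on failure."""
    for attempt in range(1, max_attempts + 1):
        try:
            return job()
        except Exception as exc:
            if attempt == max_attempts:
                raise  # retries exhausted: surface the failure for observability
            delay = base_delay * 2 ** (attempt - 1)
            print(f"attempt {attempt} failed ({exc}); retrying in {delay:.1f}s")
            time.sleep(delay)

# Example: a flaky step that succeeds on the third attempt
calls = {"n": 0}
def flaky_step():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient error")
    return "ok"

print(run_with_retries(flaky_step, base_delay=0.01))  # prints "ok"
```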
Data Quality, Validation and Testing
Business rule and consistency validations; automated data tests (quality, volume, duplication)
Monitoring of SLAs and quality indicators
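A rough sketch of the kind of automated data tests mentioned above (volume and duplication checks), in plain Python over in-memory rows; the check names and the sample data are illustrative assumptions, and production pipelines would usually run equivalent checks in Spark or a data-quality framework:

```python
def check_volume(rows, min_rows):
    """Volume check: flag a batch that is suspiciously small."""
    return len(rows) >= min_rows

def check_duplicates(rows, key):
    """Duplication check: flag any key value that appears more than once."""
    seen = set()
    for row in rows:
        if row[key] in seen:
            return False
        seen.add(row[key])
    return True

batch = [
    {"order_id": 1, "amount": 10.0},
    {"order_id": 2, "amount": 25.5},
    {"order_id": 2, "amount": 25.5},  # duplicated key
]
print(check_volume(batch, min_rows=2))      # True
print(check_duplicates(batch, "order_id"))  # False: order_id 2 repeats
```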
Modeling and Performance
Designing and optimizing tables for large data volumes
Partitioning and distribution strategies
Cost, performance and processing time optimization
Best practices for analytical modeling and/or data lake architecture
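To illustrate the partitioning strategies mentioned above, a minimal sketch of grouping rows by a partition column so that queries filtering on it can skip irrelevant partitions; the column names and sample data are assumptions for illustration, and in practice this maps to a Spark `partitionBy` write or a partitioned table in the data lake:

```python
from collections import defaultdict

def partition_by(rows, key):
    """Group rows into partitions by a column value (e.g. event date),
    so reads that filter on the key only touch matching partitions."""
    parts = defaultdict(list)
    for row in rows:
        parts[row[key]].append(row)
    return dict(parts)

events = [
    {"dt": "2024-01-01", "user": "a"},
    {"dt": "2024-01-01", "user": "b"},
    {"dt": "2024-01-02", "user": "c"},
]
parts = partition_by(events, "dt")
print(sorted(parts))             # ['2024-01-01', '2024-01-02']
print(len(parts["2024-01-01"]))  # 2
```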
Desired: AWS Certified Cloud Practitioner and AWS Certified Data Engineer certifications
Experience with data quality and observability
Tech Stack
AWS
ETL
Spark
Benefits
Multi-benefit card – choose how and where to use it.
Scholarship support for undergraduate, postgraduate, MBA and language courses.
Incentive programs for certifications.
Flexible working hours.
Competitive salaries.
Annual performance review with a structured career plan.