Understand the business's technology needs and propose solutions aligned with best practices for data collection, storage, consumption, and governance;
Conduct technical assessments of the data environment, analyzing how datasets are collected, organized, processed, and interpreted to support decision-making;
Define and design the architecture of a scalable and secure data platform;
Design, develop, and manage batch and streaming data pipelines using Azure Data Factory and Azure Databricks;
Build and optimize complex data processing workflows using ETL/ELT tools;
Collaborate with Data Analysts and business units to ensure alignment between technical requirements and strategic objectives;
Work directly with stakeholders to define and evolve the organization's data strategy;
Lead and mentor a team of data engineers, promoting best practices, technical quality, and governance;
Ensure clear communication, structured analytical reasoning, and an ethical approach to handling sensitive data.
Requirements
Experience writing and optimizing SQL queries;
Knowledge of PySpark and/or Python for data transformation and processing;
Strong knowledge of data modeling (e.g., dimensional and relational models);
Experience building ETL processes and integrating multiple data sources;
Solid experience in data architecture and engineering in Azure environments;
Proficiency with Azure Data Factory for orchestration, ingestion, and data transformation;
Advanced experience with Azure Databricks for distributed data processing;
Experience with data governance, quality, and security practices;
Familiarity with integrating relational and non-relational databases;
Experience with source control and CI/CD best practices applied to data pipelines.