Data Pipeline Engineering: Architect, build, and maintain complex, real-time, and batch data pipelines using Azure Data Factory, Python/PySpark, and Databricks.
Architecture & Modelling: Design and implement modern data warehouse solutions, data models, and data lakes, optimizing for performance and scalability.
Data Ingestion & Integration: Ingest, cleanse, and transform data from diverse sources into usable data structures for analytics.
Security & Governance: Implement security features, including role-based access control (RBAC), data encryption, and governance via Azure Purview.
Performance Optimization & Monitoring: Troubleshoot and tune data systems and SQL queries for efficiency; monitor data workflows.
Technical Leadership & Mentorship: Lead code reviews, mentor junior engineers, and define technical standards and best practices.
Collaboration: Work with data scientists, analysts, and stakeholders to deliver actionable business insights.
Requirements
Azure Services: Azure Data Factory, Databricks, Synapse Analytics, Data Lake Storage.
Languages & Tools: Python/PySpark, SQL, Scala, CI/CD (DevOps) tools.
Processes: ETL/ELT, Data Modelling, Data Processing.
Tech Stack
Azure
ETL
PySpark
Python
Scala
SQL
Benefits
Diversity & Inclusion: At Exavalu, we are committed to building a diverse and inclusive workforce.
Flexibility: Flexible working arrangements depending on the needs of employees, customers, and the business.
Welcome-Back Program: Support to help people return to mainstream work after a long break due to health or family reasons.