Azure, ETL, Python, Spark, ELT, Data Engineering, Snowflake, Databricks, Git, GitHub, GitLab, Bitbucket, Agile, Communication, Collaboration, Remote Work
About this role
Role Overview
Monitor and respond to execution alerts/failures (jobs and pipelines), prioritizing by impact and ensuring rapid recovery
Analyze incidents, identify root causes, and implement permanent fixes (preventive and corrective)
Review monitoring reports, map risks, and record actions as PBIs/user stories
Interact with users and teams via collaboration channels, capture gaps, and propose process/platform improvements
Work regularly with Azure (ADF/Databricks), Python, Spark, and Snowflake, including administering and supporting routines that fail to run as scheduled.
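The day-to-day triage described above (prioritize failures by impact, recover quickly) can be sketched in plain Python. All names and the impact scale here are hypothetical, not part of the role's actual tooling:

```python
from dataclasses import dataclass

@dataclass
class FailedRun:
    """A hypothetical record of a failed job or pipeline run."""
    pipeline: str
    impact: int        # illustrative scale: 1 = low .. 3 = critical
    retryable: bool

def triage(failures):
    """Order failed runs so the highest-impact, retryable ones are handled first."""
    return sorted(failures, key=lambda f: (-f.impact, not f.retryable))

runs = [
    FailedRun("ml_feature_refresh", impact=1, retryable=False),
    FailedRun("daily_sales_load", impact=3, retryable=True),
    FailedRun("customer_sync", impact=2, retryable=True),
]
for run in triage(runs):
    print(run.pipeline)   # highest-impact failures come out first
```

In practice the failure feed would come from ADF/Databricks monitoring rather than an in-memory list; the point is the prioritization step, not the data source.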
Requirements
Senior experience in data engineering (design, support, and development of batch and streaming pipelines)
Advanced proficiency in Python and Spark
Experience with Snowflake (data platform, consumption/transformations, best practices)
Experience with Azure services, with emphasis on:
o Azure Data Factory (ADF) and Databricks
o Integration with sources/services (e.g., Event Hubs and third-party connectivity)
o Best practices for security, storage, resource optimization, and cost management
Strong knowledge of ETL/ELT, troubleshooting, and pipeline performance
Experience with Git (GitHub/GitLab/Bitbucket), branching, pull requests, and code review
Experience working in agile environments
Advanced English (written communication and stakeholder interaction)
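The ETL/ELT distinction in the requirements can be illustrated with a minimal, hypothetical sketch: ETL transforms data in flight before loading it into the target; ELT loads raw data first and transforms it inside the target platform (as Snowflake would via SQL). All names below are illustrative:

```python
raw = [{"amount": "10.5"}, {"amount": "3"}]

def etl(rows, target):
    """ETL: transform (cast to float) before loading into the target."""
    target.extend({"amount": float(r["amount"])} for r in rows)

def elt(rows, target):
    """ELT: load the raw rows as-is, then transform inside the target."""
    target.extend(dict(r) for r in rows)   # load untyped
    for r in target:
        r["amount"] = float(r["amount"])   # transform in place

etl_target, elt_target = [], []
etl(raw, etl_target)
elt(raw, elt_target)
print(etl_target == elt_target)   # both approaches end with the same typed data
```

Either way the end state is identical typed data; the difference the role cares about is where the transformation runs and what that implies for cost, troubleshooting, and platform load.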