Responsibilities
Lead the technical effort for your Delivery Team, instilling clarity and confidence in clients and team members.
Design and implement scalable, reliable, and secure data pipelines across Azure, AWS, and GCP.
Develop and maintain ETL/ELT workflows using Apache Airflow, dbt, and cloud-native services.
Build and optimize data processing frameworks with Apache Spark, Databricks, and Snowflake.
Collaborate with data and AI engineers to operationalize machine learning models and integrate AI services (Azure AI, Amazon SageMaker, GCP Vertex AI, and MLflow on Databricks).
Implement data quality, governance, and observability practices, ensuring compliance with enterprise and regulatory standards.
Design and maintain data models and analytical layers for self-service analytics and business intelligence.
Evaluate emerging tools and frameworks to maintain best-in-class performance and cost efficiency.
Facilitate technical growth and improvement within your team, mentoring and developing talent.
Produce accurate work estimates and collaborate to meet timeline and budget expectations.
Perform code reviews, uphold standards, and drive process improvements.
Actively learn about and share new technologies and trends, continually evaluating and developing your team’s skills.
Requirements
2 to 3 years of experience leading technical development.
Deep, hands-on experience with cloud data platforms (Databricks, Snowflake, AWS, Azure, GCP).
Strong understanding of Delta Lake, Spark optimization, and distributed data processing.
Proficiency in Python, SQL, and PySpark; experience with Apache Airflow and dbt.
Knowledge of MLflow, SageMaker, Vertex AI, or Azure Machine Learning.
Solid understanding of data governance, lineage, cataloging, and security (GDPR, HIPAA, SOC 2).
Excellent communication, collaboration, and leadership skills.
Experience in Agile environments and cross-functional teams.