Collaborate with Data Architects and business partners to design and evolve enterprise data architecture and platform capabilities.
Translate architectural strategy into technical designs and delivery plans across teams.
Design, code, and optimize complex distributed data processing systems using Spark, Databricks, and cloud‑native data services.
Develop canonical data models, semantic structures, and reusable datasets to support reporting and machine learning.
Drive platform modernization initiatives such as Delta Lake adoption and metadata‑driven pipeline design.
Create reusable frameworks and platform capabilities to accelerate analytics, ML, and governed self‑service data access.
Lead root‑cause analysis for major data issues and implement long‑term improvements in data quality, lineage, and observability.
Provide technical leadership, guidance, and mentorship to Staff, Senior, and mid‑level data engineers.
Influence cross‑organizational roadmaps and engineering investments; participate in architecture reviews and governance forums.
Requirements
Bachelor’s or Master’s degree in Computer Science, Information Systems, or equivalent experience.
10+ years of experience in data engineering or a related technical field.
Expert proficiency in SQL, Python, and Spark for large‑scale data processing.
Extensive experience designing and building cloud‑native data pipelines, data models, and distributed data systems (Delta Lake, Spark, Unity Catalog, Databricks Jobs and Workflows).
Hands‑on experience with Azure (required).
Strong experience designing and tuning distributed data processing systems at scale.
Deep knowledge of data engineering best practices including version control, CI/CD, automated testing, DevOps/DataOps, and observability.
Proven ability to lead cross‑functional technical initiatives and influence architectural direction.
Strong problem‑solving, debugging, analytical, and collaboration skills; ability to thrive in agile, dynamic teams.