Own the architecture and operational health of the data-orchestration layer across internal systems and the data warehouse
Lead a team of data engineers/ETL analysts on day-to-day project work
Define, in partnership with leadership, and enforce SLAs for data pipelines, error-recovery strategies, schema-change management, and metadata/lineage documentation
Lead cross-system interoperability work
Partner with reporting/analytics teams to ensure data models align with consumption needs
Own and evolve tooling around data quality, observability, lineage, and cataloging
Evangelize “data as product/infrastructure” mindset across the organization
Requirements
5+ years of experience in data engineering / data infrastructure roles
2+ years managing/leading other engineers or architects
Expert-level experience with orchestration tooling (e.g., Airflow, Dagster), ingestion frameworks, cloud platforms (AWS/GCP/Azure), and data-warehouse technologies (Snowflake, BigQuery, Redshift, etc.)
Expert-level experience with data modeling, semantic layers (dbt or equivalent), metadata/lineage tooling, and data quality frameworks
Strong knowledge of API design, data contracts, and system interoperability; experience with microservices or event-driven data architectures is a plus
Proven track record designing and implementing data pipelines/infrastructure across multiple source systems into a data warehouse/lakehouse environment
Excellent verbal and written communication skills
Experience building or improving observability, monitoring, and alerting for data pipelines
Bachelor’s degree in computer science, engineering, mathematics, or a related field (or equivalent real-world experience). Master’s degree or additional certifications are a plus.