Own and scale a modern healthcare data platform by building the core services, automation, and observability that power real-world patient and provider outcomes
Responsibilities
Architect and build core services, automation tools, and integrations that power our client's data ecosystem
Own high-impact platform components and improve pipeline reliability and observability
Partner closely with data engineering, analytics, and DevOps to advance the scalability and developer experience of our client's data platform
Requirements
7+ years of experience in data engineering or software development
Strong programming skills in Python and SQL, plus hands-on experience with at least one major data platform (Snowflake, BigQuery, Redshift, or similar)
Experience developing tooling for schema evolution, data contracts, and developer self-service
Deep experience with streaming, distributed compute, or S3-based open table formats (Kafka, Spark, Iceberg/Delta/Hudi)
Experience with schema governance, metadata systems, and data quality frameworks
Understanding of orchestration tools (Airflow, Dagster, Prefect, etc.)
Solid grasp of CI/CD and Docker
At least 2 years of experience with AWS
Experience building data pipelines with dbt
Tech Stack
Airflow
Amazon Redshift
AWS
BigQuery
Docker
Kafka
Python
Spark
SQL
Benefits
Fully remote within the contiguous United States