Interwell Health is a kidney care management company dedicated to reimagining healthcare. The Staff Data Engineer will serve as a senior technical leader responsible for shaping and governing the data ecosystem, delivering high-quality data products that enhance clinical and operational outcomes.
Responsibilities:
- Design and evolve a scalable, secure, cloud‑native lakehouse platform leveraging Databricks, Microsoft Fabric (OneLake, Lakehouse, Data Factory), and dbt
- Define modeling patterns, governance frameworks, and engineering best practices across the data lifecycle
- Lead design reviews and guide teams in adopting scalable architectural patterns
- Drive long‑term platform strategy and evaluate emerging technologies
- Design and implement batch and streaming data pipelines for healthcare data sources (EHR, claims, HL7/FHIR, APIs, flat files, databases)
- Develop modular ingestion, quality, lineage, metadata, and observability frameworks that scale across domains
- Produce clean, analytics‑ready datasets and data models for BI, analytics, and machine learning workloads
- Implement HIPAA‑aligned access patterns and secure handling of PHI
- Architect Databricks workloads (clusters, jobs, Unity Catalog, Delta Lake) for reliability, performance, and cost efficiency
- Integrate Databricks and Microsoft Fabric with Azure services and enterprise systems
- Partner with product managers, data scientists, analysts, clinicians, and business stakeholders to translate healthcare data needs into scalable solutions
- Lead cross-functional initiatives that modernize and unify the organization's data ecosystem
- Mentor senior and mid-level engineers; elevate team capability through technical coaching and standards
- Drive roadmap planning, platform evolution, and long-term data strategy
- Champion engineering excellence, reliability practices, documentation quality, and governance
Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field
- 7+ years of experience in data engineering
- 2+ years operating in a senior- or staff-level engineering role
- Deep hands-on proficiency with Databricks, Spark, Delta Lake, dbt, and Python
- Proven ability to design and operate large-scale cloud data platforms (Azure preferred)
- Hands-on experience with Microsoft Fabric workloads, including Data Engineering, Data Factory, Lakehouse, and OneLake
- Advanced data platform architecture and Lakehouse design expertise
- Demonstrated ability to design modular, extensible frameworks and guide the long-term evolution of enterprise data platforms
- Strong command of distributed data processing and cloud-native engineering
- Experience working in HIPAA-regulated environments and handling PHI
- Healthcare data fluency, including regulated data handling and compliance
- Technical leadership, mentorship, and influence across teams
- Strong communication skills with both technical and clinical stakeholders
- Experience with platform reliability, CI/CD for data pipelines, and infrastructure as code
- 100% remote (ET or CT work hours preferred)
- Experience implementing and supporting Epic integrations, leveraging Cogito Cloud and Caboodle data models, and delivering reliable incremental data pipelines from Caboodle/Clarity