Responsibilities
Partner with internal teams and external clients to align on data formats, specifications, and integration requirements.
Monitor daily data pipelines, alerts, and file transfers; triage and resolve issues quickly to maintain reliable operations.
Identify manual steps in file processing, validation, and monitoring, and replace them with automated, scalable solutions using Databricks, Python, and Azure tooling.
Implement and maintain robust monitoring to detect anomalies in inbound and outbound healthcare files, including 834/837 and flat file formats.
Provide hands-on technical leadership and coaching in SQL, data pipelines, and healthcare data structures.
Lead the Data Operations team through daily standups, clear prioritization, and coaching.
Requirements
Bachelor’s degree in Computer Science, Engineering, or a related field.
5+ years of experience in SQL with the ability to write complex queries.
3+ years of experience performing ETL and data engineering tasks.
3+ years of experience with Big Data technologies such as Databricks, Python, and Data Lakes.
Proficiency in SQL, Azure Data Services (including Azure Data Factory), Databricks, and Python.
Strong understanding of DevOps and DataOps concepts.
Experience with data observability tools and practices.
Familiarity with data quality frameworks and automation tools.
Experience with healthcare EDI formats (834, 837, etc.).
Understanding of eligibility, claims, accumulators, and healthcare reporting concepts.
Familiarity with PHI handling, HIPAA considerations, and governance practices.