Humana Inc. is committed to putting health first and is seeking a Senior Data Engineer to design, build, and maintain data processing architectures and solutions. This role involves developing scalable data pipelines and implementing data governance standards to support analytics and business intelligence.
Responsibilities:
- You will be responsible for developing and optimizing scalable data pipelines to support critical analytics and business intelligence
- You will design, build, and maintain data models and architectures that enable efficient storage, retrieval, and processing of large-scale datasets
- You will work with key stakeholders to define the business problem, determine requirements, and ensure quality deliverables
- You will manage and prioritize data engineering roadmaps in partnership with business leaders, translating strategic objectives into actionable work items and clear delivery plans
- You will implement best‑practice data governance, data quality, and data security standards, ensuring that pipelines and datasets are reliable, well‑documented, and compliant with organizational requirements
- You will facilitate regular stakeholder communication, providing updates on progress, risks, and dependencies while ensuring expectations are aligned across technical and non‑technical audiences
- You will conduct code reviews and enforce data engineering best practices
Requirements:
- Bachelor's degree in Computer Science, Software Engineering, or a related field
- Must be passionate about contributing to an organization focused on continuously improving consumer experiences
- Proven track record of managing technical initiatives from planning to delivery, maintaining high standards of quality, and guiding engineers through mentorship and best practice adoption
- Ability to lead technical discussions, architecture reviews, and solutioning
- 5+ years of experience as a Data Engineer or similar role
- 3+ years of hands-on experience with Databricks
- Proficient in designing and implementing ETL/ELT workflows with strong expertise in Python, PySpark, and SQL
- Experience designing and implementing cloud-based big data solutions using Azure, AWS, or GCP
- Solid understanding of version control (Git), CI/CD pipelines, and software development lifecycle (SDLC) best practices
- Excellent communication skills and the ability to convey complex data concepts clearly and concisely
- One or more professional certifications in Azure and/or Databricks
- Experience using AI to improve data engineering workflows
- Experience working with healthcare data
- Master's degree