Everly is focused on transforming the life insurance industry by providing transparent, consumer-first solutions. They are seeking a Senior Analytics Engineer to lead the analytics platform, manage the data lifecycle, and ensure compliance and governance as the company scales.
Responsibilities:
- Own the analytics platform roadmap — prioritizing improvements, architecture decisions, and tooling enhancements in partnership with IT and the Head of Data, Analytics and AI Solutions
- Define and enforce platform SLAs, data quality standards, and operational runbooks
- Serve as a technical mentor to the Analytics Engineer/Data Scientist
- Own the end-to-end data pipeline from ingestion through transformation and delivery into BI tools and data products across our AWS stack (Step Functions, Glue, Snowflake, dbt, Sigma)
- Partner with third-party vendors to maintain DataOps infrastructure using infrastructure-as-code practices
- Monitor pipeline health and drive timely resolution of data issues with providers and consumers
- Maintain and evolve an enterprise data model, data dictionary, and business rules that remain agnostic across sources, insurance products, and distributors
- Enforce Snowflake security best practices — masking policies, RBAC, and least-privilege access — aligned to GLBA NPI/NPI-H classification requirements
- Champion data governance, lineage, and quality standards; support NAIC MDL-668 audit trails, SOC 2 Type II evidence collection, and adherence to CCPA/CPRA, HIPAA, and state breach notification requirements
- Design and maintain systems of record for core insurance domains (policy, claims, billing, distribution, customer) with proper versioning, effective dating, and audit history
- Own database design and management for AI tools and models, including vector databases, feature stores, and model input/output pipelines
- Design and maintain a semantic layer enabling natural language insight discovery
- Own executive and operational dashboards, from board-level reporting down to day-to-day operations
- Set BI development standards for the Analytics Engineer/Data Scientist to execute against
- Maintain and enhance actuarial data pipelines including distributed compute and Glue Jobs for data curation
- Advise on data architecture decisions supporting future AI/ML and modeling use cases
Requirements:
- Bachelor's degree in Computer Science, Data Engineering, Statistics, Mathematics, or a related field (or equivalent work experience)
- 8+ years of experience in analytics engineering, data engineering, or a closely related role, with demonstrated progression in scope and ownership
- Hands-on experience with dbt (data build tool) for transformation, data modeling, and testing at scale
- Strong proficiency with Snowflake, including dynamic data masking, RBAC, performance optimization, and cost governance
- Hands-on experience with AWS data services including Glue, Step Functions, S3, and Lambda
- Strong SQL proficiency; Python is required at this level (data pipeline development, scripting, automation)
- Demonstrated experience owning a BI delivery roadmap and building dashboards for executive and operational audiences
- Demonstrated experience building and maintaining enterprise data models and semantic layers in a multi-product, multi-source environment
- Strong understanding of data governance, data quality frameworks, and privacy best practices
- Hands-on experience with infrastructure-as-code tools (Terraform, CloudFormation, or similar)
- Working knowledge of data privacy and security compliance frameworks relevant to US life and annuity carriers, including GLBA and NAIC MDL-668
- Experience owning or contributing to a technical product or platform roadmap
- Demonstrated ability to manage competing stakeholder priorities and communicate platform trade-offs to senior leadership
- Experience in the life insurance or annuities industry strongly preferred; broader financial services considered
- Exposure to actuarial data workflows or direct experience supporting actuarial teams
- Experience with data observability tools (Monte Carlo, Great Expectations, dbt tests, etc.)
- Experience with Git-based version control and CI/CD workflows for data pipelines
- Experience working with distributed compute frameworks (e.g., Spark via AWS Glue or EMR)
- Familiarity with SOC 2 Type II control frameworks and evidence collection practices
- Exposure to HIPAA requirements as they relate to underwriting or health-adjacent data
- Familiarity with CCPA/CPRA and multi-state data breach notification obligations
- Experience transitioning work from external vendors or consultancies to in-house teams
- Exposure to AI/ML pipeline patterns or feature engineering in support of modeling teams