COVU is a venture-backed technology startup transforming the insurance industry. We are seeking an experienced Senior Data Engineer to develop key components of our core data infrastructure, particularly the Policy Journal, which serves as the single source of truth for policy and accounting information.
Responsibilities:
- Develop the Policy Journal: Be a primary builder of our master data solution that unifies policy, commission, and accounting data from sources like IVANS and Applied EPIC. You will implement the data models and pipelines that create the "gold record" powering our platform
- Ensure Data Quality and Reliability: Implement robust data quality checks, monitoring, and alerting to ensure the accuracy and timeliness of all data pipelines. You will champion and contribute to best practices in data governance and engineering
- Build the Foundational Analytics Platform: Implement and enhance our new analytics framework using modern tooling (e.g., Snowflake, dbt, Airflow). You will build and optimize critical data pipelines, transforming raw data into clean, reliable, and performant dimensional models for business intelligence
- Modernize Core ETL Processes: Systematically refactor our existing Java/SQL (PostgreSQL) ETL system. You will identify and resolve core issues (e.g., data duplication, performance bottlenecks), strategically rewriting critical components in Python and migrating orchestration to Airflow
- Implement Data Quality Frameworks: Working within our company's QA strategy, you will build and execute automated data validation frameworks. You will be responsible for writing tests that ensure the accuracy, completeness, and integrity of our data pipelines and the Policy Journal
- Collaborate and Contribute to Design: Partner with product managers, the Lead Data Engineer, and business stakeholders to understand complex business requirements. You will be a key technical contributor, translating business needs into well-designed and maintainable solutions
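To give a flavor of the data quality work described above, here is a minimal sketch of an automated validation check that flags the duplication and completeness issues the role calls out. The function and field names (`validate_policies`, `policy_id`, `carrier`, `premium`) are hypothetical illustrations, not part of COVU's actual codebase:

```python
from collections import Counter

def validate_policies(rows, required_fields=("policy_id", "carrier", "premium")):
    """Return a list of data quality issues found in raw policy records.

    Illustrates two checks mentioned above: duplicated records and
    missing required fields. All names here are illustrative.
    """
    issues = []
    # Flag duplicate policy IDs (the "data duplication" class of issue).
    counts = Counter(row.get("policy_id") for row in rows)
    for pid, n in counts.items():
        if n > 1:
            issues.append(f"duplicate policy_id {pid!r} ({n} rows)")
    # Flag rows missing required fields (a completeness check).
    for i, row in enumerate(rows):
        missing = [f for f in required_fields if row.get(f) in (None, "")]
        if missing:
            issues.append(f"row {i}: missing {', '.join(missing)}")
    return issues
```

In practice checks like these would run inside the orchestrator (e.g., as an Airflow task) with alerting on any non-empty result.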
Requirements:
- 5+ years of experience in data engineering, with a proven track record of building and maintaining scalable data pipelines in production
- Expert-level proficiency in Python and SQL
- Strong experience with modern data stack technologies, including a cloud data warehouse (Snowflake or Redshift), a workflow orchestrator (Airflow is highly preferred), and data transformation tools
- Hands-on experience with AWS data services (e.g., S3, Glue, Lambda, RDS)
- Experience in the insurance technology (insurtech) industry and familiarity with insurance data concepts (e.g., policies, commissions, claims)
- Demonstrated ability to contribute to the design and implementation of robust data models (e.g., dimensional modeling) for analytics and reporting
- A pragmatic problem-solver who can analyze and refactor complex legacy systems. While you won't be writing new Java code, the ability to read and understand existing Java/Hibernate logic is a strong plus
- Excellent communication skills and the ability to collaborate effectively with both technical and non-technical stakeholders
- Direct experience working with data from Agency Management Systems such as Applied EPIC, NowCerts, or EZLynx
- Direct experience working with carrier data (ACORD XML, IVANS AL3)
- Experience with business intelligence tools like Tableau, Looker, or Power BI
- Prior experience in a startup or fast-paced agile environment
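As context for the dimensional modeling requirement above, here is a toy sketch of the pattern: raw transactional records are split into a dimension table with surrogate keys and a fact table that references them. The table and column names (`build_star_schema`, `dim_carrier`, `fact_commission`) are hypothetical, chosen only to illustrate the technique:

```python
def build_star_schema(raw_commissions):
    """Split raw commission records into a carrier dimension and a fact table.

    A toy illustration of dimensional modeling: each distinct carrier
    becomes a dimension row with a surrogate key, and each transaction
    becomes a fact row referencing that key. Names are illustrative.
    """
    dim_carrier = {}          # carrier name -> surrogate key
    fact_commission = []
    for rec in raw_commissions:
        name = rec["carrier"]
        if name not in dim_carrier:
            dim_carrier[name] = len(dim_carrier) + 1  # assign surrogate key
        fact_commission.append({
            "carrier_key": dim_carrier[name],
            "policy_id": rec["policy_id"],
            "commission_amount": rec["amount"],
        })
    # Emit the dimension as a list of rows alongside the fact table.
    dim_rows = [{"carrier_key": k, "carrier_name": n} for n, k in dim_carrier.items()]
    return dim_rows, fact_commission
```

In the stack described above this transformation would typically live in dbt models materialized in Snowflake rather than hand-rolled Python, but the modeling idea is the same.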