HDI Global Insurance Company is a commercial property and casualty insurer headquartered in Chicago, IL. The company is seeking an Analytics Engineer to join its Data & Insights team to build and maintain its modern analytics platform, with a focus on data engineering and analytics using tools such as dbt and Snowflake.
Responsibilities:
- Design, build, and maintain data ingestion and transformation pipelines using a combination of Python-based ETL workflows, OpenFlow, Snowflake-native ELT patterns, and SQL Server stored procedures as part of an ongoing enterprise data warehouse (EDW) migration to Snowflake
- Develop, test, and maintain dbt models to transform raw and prepared data into analytics-ready, consumable data models in Snowflake
- Implement and enforce analytics engineering best practices, including modular modeling, testing, documentation, and version control
- Optimize Snowflake models for performance, scalability, and cost efficiency
- Design and maintain dimensional and analytics-friendly data models (facts, dimensions, marts)
- Partner with Data Architect and Data Engineers to align analytics models with enterprise data standards
- Support ongoing evolution of the enterprise data warehouse and curated data layers
- Work directly in Snowflake to develop ELT logic, optimize queries, and manage data structures
- Leverage Snowflake features such as tasks, streams, views, and secure data access where appropriate
- Implement data quality tests and validation logic within dbt (see the illustrative sketch after this list)
- Participate in CI/CD workflows for analytics code using Git and automated deployments
- Ensure adherence to security, compliance, and data governance standards, especially for regulated insurance data
- Work as part of a Scrum or Agile delivery team, collaborating with product owners, analysts, and engineers
- Participate in agile ceremonies such as sprint planning, stand-ups, backlog grooming, and retrospectives
- Use Jira or similar work management tools (e.g., Azure DevOps, Confluence) to track work, manage backlogs, and document requirements
- Manage work through user stories, tasks, and sprint commitments, delivering incremental, high-quality data assets
- Provide customer assistance with reporting tools, software, and reports
- Actively build relationships with the customer community, proactively communicating relevant information and discussing customers' needs, initiatives, and service levels through periodic meetings
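To make the data quality responsibility above concrete: in practice these checks are usually declared as dbt tests, but the idea can be illustrated as plain SQL assertions run against Snowflake. The sketch below uses the official Snowflake Python connector; the table analytics.fct_policy, the policy_id column, and the environment-variable connection settings are hypothetical placeholders, not details from this posting.

```python
"""Minimal sketch of dbt-style data quality checks (not-null, uniqueness),
expressed as plain SQL run through the Snowflake Python connector.
All table/column names and connection settings are hypothetical."""
import os
import snowflake.connector

# Each check passes when its query counts zero offending rows, mirroring
# dbt's convention that a test fails if its query returns any rows.
CHECKS = {
    "policy_id is never null": (
        "SELECT COUNT(*) FROM analytics.fct_policy WHERE policy_id IS NULL"
    ),
    "policy_id is unique": (
        "SELECT COUNT(*) FROM ("
        " SELECT policy_id FROM analytics.fct_policy"
        " GROUP BY policy_id HAVING COUNT(*) > 1)"
    ),
}

def run_checks() -> bool:
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse=os.environ.get("SNOWFLAKE_WAREHOUSE"),
    )
    try:
        ok = True
        cur = conn.cursor()
        for name, sql in CHECKS.items():
            # SnowflakeCursor.execute returns the cursor, so calls chain.
            (violations,) = cur.execute(sql).fetchone()
            print(f"{name}: {'PASS' if violations == 0 else f'FAIL ({violations} rows)'}")
            ok = ok and violations == 0
        return ok
    finally:
        conn.close()

if __name__ == "__main__":
    raise SystemExit(0 if run_checks() else 1)
```

In dbt itself, the same checks would normally be declared as the built-in not_null and unique generic tests in a model's YAML file; the sketch only mirrors the underlying pass/fail convention.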
Requirements:
- Bachelor's degree in Computer Science, Engineering, Information Systems, or related field, or equivalent experience
- 3+ years in analytics engineering, data engineering, or data warehousing roles
- Hands-on experience with dbt in a production environment
- Strong experience working with Snowflake and SQL Server
- Experience working in Agile/Scrum teams, participating in sprint ceremonies and iterative delivery
- Proficient with Jira, Confluence, or similar tools
- Advanced SQL and Python skills
- Experience with ELT/ETL-based data transformation patterns
- Familiarity with Git-based version control and CI/CD pipelines
- Experience supporting BI tools such as Qlik, Power BI, or similar
- Experience participating in a data platform, analytics, or enterprise transformation initiative, such as migrating from legacy data warehouses or ETL tools to a modern cloud-based analytics stack
- Experience supporting the transition of stakeholders and downstream consumers to new data models, tools, or reporting paradigms
- Ability to balance delivery of new capabilities while maintaining continuity for existing reporting and business processes
- Strong analytical and problem-solving skills
- Ability to communicate effectively with both technical and non-technical stakeholders
- Organized, detail-oriented, and able to manage multiple priorities
- Experience with Guidewire Cloud Data Access (CDA) and Guidewire PolicyCenter data models
- Background in property & casualty insurance
- Experience with cloud platforms (Azure preferred)
- Exposure to AI-enabled data platform features (e.g., Snowflake Cortex) or participation in analytics projects that support machine learning or generative AI use cases
- Familiarity with preparing, curating, and governing data for AI-driven analytics
- Familiarity with data governance, metadata management, or data cataloging practices