James Search Group is partnering with a leading Property & Casualty insurance carrier that is investing in data engineering, analytics, and machine learning. As Senior Data Engineer, you will design, build, and maintain a modern data platform while collaborating closely with analytics engineers and data scientists.
Responsibilities:
- Design, build, and optimize scalable data pipelines for batch and real-time processing
- Implement data ingestion frameworks including CDC from core systems, APIs, and third-party platforms (Salesforce, Workday, Duck Creek, etc.)
- Develop and optimize Apache Spark jobs on Databricks, leveraging Delta Lake, DLT pipelines, and lakehouse architectures
- Ensure data quality, lineage, and governance using Unity Catalog, CI/CD, and role-based access/security controls
- Partner with analytics engineers (dbt) by delivering clean, well-structured upstream data
- Mentor peers, contribute to architecture decisions, and foster a culture of craftsmanship
- Leverage AWS services (S3, Glue, Lambda, etc.) and DevOps tools (GitHub, CI/CD) for scalable, production-grade deployments
Requirements:
- 3-5+ years of professional experience in data engineering, ideally within insurance or financial services
- Strong proficiency in Python, SQL, and Spark for building and optimizing pipelines
- Hands-on expertise with Databricks (Unity Catalog, Delta Lake, DLT pipelines) and/or Azure Data Services
- Strong knowledge of AWS data services, with the ability to adapt across cloud platforms (Azure, GCP)
- Familiarity with modern data architectures (medallion, lakehouse, streaming)
- Experience with GitHub, CI/CD pipelines, and testing frameworks
- A problem-solving mindset that balances pragmatism with scalability, and a passion for working in collaborative teams
This is a remote-first position, with the option to work from one of the carrier's multiple U.S. office locations.