Kin Insurance is a remote-first company focused on simplifying and improving homeowners’ insurance and services. They are seeking a Data Engineering Manager to lead the evolution of their data platform, ensuring reliable data pipelines and systems to support analytics and decision-making across the organization.
Responsibilities:
- Lead and develop a team of data engineers, driving execution, operational excellence, and long-term growth of the data engineering discipline
- Own the reliability, scalability, and evolution of Kin’s data platform infrastructure and lakehouse environment within Databricks
- Design and optimize batch, streaming, and event-driven data pipelines using technologies such as PySpark, Databricks, SQLMesh, Kafka, and Fivetran
- Establish engineering standards for testing, observability, CI/CD, monitoring, incident response, and operational reliability across the data platform
- Define and implement governance, lineage, classification, retention, and access control patterns in partnership with Security, Legal, and Compliance teams
- Drive infrastructure-as-code and GitOps practices for platform resources using tools such as Terraform
- Partner with application engineering teams to build resilient integrations between source systems and the data platform while proactively managing schema and dependency changes
- Collaborate with analytics engineering, product, and data science stakeholders to align platform investments with business priorities and downstream data needs
- Guide the team through large-scale technical initiatives with clear prioritization, iterative delivery, and sound operational tradeoff decisions
- Identify opportunities to improve engineering efficiency and platform scalability through automation, tooling improvements, and thoughtful use of AI-enabled workflows where appropriate
Requirements:
- 6–8+ years of experience in data engineering, platform engineering, or infrastructure-focused data roles, including experience managing and developing data engineering teams
- Hands-on experience building and operating modern data platforms using technologies such as Databricks, Snowflake, BigQuery, or similar lakehouse and warehouse ecosystems
- Experience designing and maintaining scalable batch, streaming, and event-driven data pipelines using tools such as PySpark, SQLMesh, Kafka, Fivetran, or Databricks Jobs
- Experience implementing CI/CD, automated testing, monitoring, and observability practices for data infrastructure and pipelines
- Experience defining and implementing data governance, privacy, classification, retention, and access control frameworks in regulated or compliance-sensitive environments
- Proficiency with Python and PySpark for large-scale data engineering use cases
- Experience managing infrastructure-as-code and GitOps workflows using tools such as Terraform
- Ability to partner effectively across technical and non-technical teams while navigating ambiguity, operational tradeoffs, and competing priorities
- Experience leading teams through complex platform migrations, architectural evolution, or operational scaling initiatives
- Exposure to AI/ML workflows, predictive analytics systems, or MLOps integration patterns
- Experience operating in organizations where data engineering and analytics engineering are distinct but closely aligned functions