Life360 is a company focused on keeping people connected with their loved ones through its mobile applications and tracking devices. It is seeking a Staff DevOps Engineer to design, build, and operate scalable infrastructure for its adtech data platform, collaborating closely with engineering and data teams to ensure reliable data pipelines and services.
Responsibilities:
- Partner closely with data engineering and data science teams to enable reliable data pipelines, analytics, and ML workflows
- Support, operate, and optimize Databricks and Snowflake environments in production
- Operate and support Databricks Jobs and Apache Airflow DAGs created by development teams, ensuring reliable orchestration of data pipelines and workflows
- Monitor, troubleshoot, and optimize systems for performance, reliability, and cost efficiency
- Partner closely with security and compliance teams to implement data security, auditing, and access review processes
- Design, implement, and maintain CI/CD pipelines using Jenkins and GitOps practices
- Provision and manage cloud and data platform infrastructure using Terraform and infrastructure-as-code best practices
- Deploy and operate platform and data services on Kubernetes (EKS) clusters
- Implement and maintain GitOps workflows using Argo CD
- Package, deploy, and manage services using Helm and Kustomize
- Automate configuration management and operational tasks using Ansible
- Implement and maintain observability and monitoring for infrastructure and data platforms
- Participate in incident response and on-call rotations supporting data and platform systems
- Contribute to operational standards, documentation, and best practices
- Lead technical initiatives, mentor team members, and influence architectural direction
Requirements:
- 5+ years of experience in DevOps, SRE, or infrastructure engineering roles, including supporting production data platforms at scale
- Hands-on experience operating Databricks and Snowflake in production environments
- Experience supporting data engineering and data science teams
- Experience deploying and supporting Databricks Jobs and Apache Airflow workflows in production environments
- Strong experience with Apache Spark
- Hands-on experience with Kubernetes (EKS), Argo CD, Helm, and Kustomize
- Strong experience with Terraform and infrastructure-as-code practices
- Experience with CI/CD pipelines, including Jenkins and GitOps tooling
- Experience using Ansible for configuration management
- Strong programming skills in Python, Go, or similar languages
- Strong Linux/Unix administration experience and shell/bash scripting
- Hands-on experience with observability tools (e.g., Prometheus, Grafana, ELK stack)
- Strong problem-solving skills and ability to work independently in fast-paced environments
- Demonstrated technical leadership, mentorship experience, and ability to drive architectural decisions
- Experience supporting ML workloads (feature generation, offline training)
- Knowledge of Apache Iceberg, including enabling cross-platform table access between Databricks and Snowflake
- Experience with data quality, lineage, or governance tooling to support auditability, access controls, and compliance requirements (e.g., SOX)
- Experience designing infrastructure to support microservices-based architectures
- Experience with service mesh technologies (e.g., Istio, Linkerd, AWS App Mesh)
- Experience with cloud cost optimization and performance tuning