Samsara is the pioneer of the Connected Operations™ Cloud, which helps organizations leverage IoT data to improve their operations. The Senior Data Platform Engineer will be responsible for administering Databricks, overseeing the data-engineering ecosystem, and collaborating with cross-functional teams to deliver scalable data solutions.
Responsibilities:
- Administer and monitor Databricks workspaces and the underlying AWS infrastructure. Develop platform-administration tools and capabilities to effectively administer, monitor, track, and troubleshoot Databricks resources: clusters, catalogs, users, groups, databases, storage, and security
- Act as the primary Databricks SME, assisting and guiding teams on platform features and best practices
- Collaborate with data engineers, analysts, and data scientists to operationalize and optimize workflows and data pipelines
- Oversee the data-engineering development ecosystem, including developer tooling, CI/CD pipelines, and monitoring frameworks
- Own implementation and management of infrastructure as code (IaC; Terraform, CloudFormation) and CI/CD automation for Databricks and AWS resources
- Ensure security, high availability, disaster recovery, and compliance across data services
- Incorporate DevOps best practices including automated testing, deployment, observability, and monitoring
- Engage directly with internal cross-functional stakeholders to understand their data needs and design scalable solutions
- Collaborate with engineers, managers, and vendor representatives in evaluating new features and solutions. Lead rapid prototyping and proofs of concept to evaluate performance, cost, security, scalability, and observability
- Mentor junior team members, and provide technical guidance, training, and knowledge-sharing across teams
Requirements:
- Bachelor's or Master's degree in Computer Science, Software Engineering, Electrical Engineering, Computer Engineering, or a related discipline
- 10+ years of experience in a Software Engineering, Platform Engineering, Data Engineering, DevOps/DataOps, or similar technical role, including at least 2 years in a data-infrastructure- or platform-focused role
- Experience building, delivering, and administering large-scale, production-grade data platforms and services for data-engineering, business-analyst, and data-science teams
- Solid knowledge of Databricks features and administration, including Unity Catalog, cluster management, security, troubleshooting and root-cause analysis, and performance optimization
- Familiarity with AWS services and cloud-infrastructure provisioning, management, monitoring, and security (e.g., S3, IAM, RDS, Lambda, API Gateway, VPC, EC2, ECS/EKS)
- Strong knowledge of SQL and Python, and hands-on data-engineering experience in designing and developing data pipelines and ETL routines from a variety of sources (SaaS corporate systems, APIs, RDBMS)
- Skilled with change data capture (CDC) and incremental and batch loading techniques; experienced in diagnosing underlying issues and translating findings into technical solutions
- Excellent problem-solving, communication, and stakeholder management skills
- Experience delivering and managing Databricks and AWS infrastructure for hundreds of users
- Experience as a technical lead
- Knowledge of DevOps tools and practices: GitHub/GitLab, CI/CD systems, Terraform, monitoring/logging tools (e.g., Datadog, CloudWatch)
- Experience implementing robust data governance and security measures to ensure data integrity and compliance