Design, build, and maintain scalable data pipelines (batch and real-time) using modern cloud tools.
Partner closely with product and engineering teams to deliver data-driven features and services that support our safety mission.
Maintain and enhance our existing ETL/ELT processes across data lake, data warehouse, and data lakehouse environments.
Ensure data quality, reliability, and observability through robust monitoring, alerting, and documentation practices.
Participate actively in architectural discussions and contribute to the long-term strategy of our data platform.
Requirements
Degree in Computing Science, Data Engineering, Information Technology, or a related discipline.
At least 3 years of experience in a data engineering role.
Experience with event-driven architecture and related technologies (e.g., Apache Kafka, Amazon MSK).
Familiarity with Infrastructure as Code (IaC) and CI/CD best practices in multi-environment deployment workflows, including hands-on experience with Terraform and GitHub Actions.
Experience with monitoring and alerting platforms such as CloudWatch, Datadog, or PagerDuty.
Hands-on experience with ETL/ELT and orchestration tools such as AWS Glue Studio, Matillion, and AWS Step Functions.
Proficiency with AWS data services: Redshift, EMR, S3, Lambda, Kinesis, and MSK.
Strong proficiency in SQL, Python, Java, and/or Scala.
Tech Stack
Amazon Redshift
Apache Kafka
AWS
Cloud
ETL
Java
Matillion
Python
Scala
SQL
Terraform
Benefits
Competitive base salary and annual compensation review
Comprehensive health and dental benefits*
Mental health and wellness support
Flexible work arrangements and hybrid work model for eligible positions
Paid vacation, personal and sick days*
Professional development opportunities
Education funding
Participation in the Company's employee stock ownership plan
A collaborative, inclusive, and mission-driven culture