Develop data analytics infrastructure that supports safety assurance analytics addressing internal and external stakeholder needs across the phases of automated vehicle development and deployment, including both real-world and simulation data.
Develop interactive visualizations that enhance transparency and reduce barriers to interrogating source data.
Pilot and define metrics for monitoring development operations and deployment, and establish sufficiency criteria for launch readiness.
Identify relevant data to support safety monitoring, and develop a reliable supply chain of continuously flowing data from a variety of internal and external sources to support safety-assurance activities.
Develop cloud-based, continuously available analytics pipelines that carry data from its raw form, through analyses, and into browser-based interactive visualizations and periodic reporting artifacts.
Select appropriate engineering- and physics-based signal processing (sampling, filtering, smoothing, etc.) to prepare raw signals for analysis and/or storage in a downsampled form.
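For illustration, a minimal sketch of this kind of preparation step, assuming a moving-average low-pass filter followed by decimation (the function name and window/factor parameters are hypothetical, not from this posting):

```python
import numpy as np

def smooth_and_downsample(signal: np.ndarray, window: int, factor: int) -> np.ndarray:
    """Apply a moving-average low-pass filter, then decimate.

    Smoothing before decimation limits aliasing when a raw
    high-rate telemetry signal is stored in a downsampled form.
    """
    kernel = np.ones(window) / window           # simple moving-average kernel
    smoothed = np.convolve(signal, kernel, mode="same")
    return smoothed[::factor]                   # keep every `factor`-th sample

# Example: reduce a 100 Hz speed trace to 10 Hz
speed_100hz = np.ones(100)                      # placeholder constant signal
speed_10hz = smooth_and_downsample(speed_100hz, window=5, factor=10)
```

In practice a physics-informed choice of filter (e.g. a proper anti-aliasing filter matched to the decimation factor) would replace the moving average; the sketch only shows the filter-then-decimate ordering.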
Integrate and transform data streams to construct physically meaningful representation of vehicle motion, driving context, and intermediate system performance, including reduction of time-series representations to features.
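As a hedged example of reducing a time-series representation to features, one might summarize an event slice (here a braking maneuver) with a few scalar features; the feature names and the finite-difference approach are illustrative assumptions:

```python
import numpy as np

def summarize_event(t: np.ndarray, speed: np.ndarray) -> dict:
    """Reduce a time-series slice of an event to scalar features.

    `t` is time in seconds, `speed` in m/s; acceleration is obtained
    by finite differences (np.gradient).
    """
    accel = np.gradient(speed, t)               # finite-difference acceleration
    return {
        "duration_s": float(t[-1] - t[0]),
        "mean_speed": float(speed.mean()),
        "peak_decel": float(-accel.min()),      # most negative accel = hardest braking
    }

# Example: a constant 5 m/s^2 braking event from 20 m/s over 2 s
t = np.linspace(0.0, 2.0, 21)
features = summarize_event(t, 20.0 - 5.0 * t)
```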
Apply engineering domain expertise to distinguish erroneous sensor data from real outliers.
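One common form of such a check is a physical-plausibility test: a speed jump implying an acceleration beyond what the vehicle can physically produce points to a sensor or transmission glitch rather than a real driving outlier. A minimal sketch, where the 12 m/s² bound is an illustrative assumption:

```python
import numpy as np

def flag_implausible(t: np.ndarray, speed: np.ndarray, max_accel: float = 12.0) -> np.ndarray:
    """Flag speed samples whose implied acceleration exceeds a physical limit.

    Samples violating the bound (`max_accel`, an assumed illustrative
    value in m/s^2) are likely sensor errors, not genuine outliers.
    Returns a boolean mask aligned with `speed`.
    """
    accel = np.abs(np.diff(speed) / np.diff(t))  # |dv/dt| between samples
    bad = np.zeros(len(speed), dtype=bool)
    bad[1:] = accel > max_accel                  # mark the sample after the jump
    return bad

# Example: a single-sample spike to 50 m/s in an otherwise smooth trace
t = np.arange(5.0)
speed = np.array([10.0, 10.5, 50.0, 11.0, 11.5])
mask = flag_implausible(t, speed)
```

A genuine outlier (e.g. sustained hard braking) would stay within physical limits at every step and pass this check, which is what separates it from a glitch.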
Optimize code for efficiency and package for automated cloud-based execution.
Requirements
Bachelor’s degree in Computer Science, Mechanical Engineering, Vehicle Engineering, Physics, or a related field; or equivalent practical experience
5+ years of experience in large-scale data analysis of human and/or automated-driving performance data
5+ years in ADAS, autonomous vehicles, robotics, or a related field
Programming & Frameworks: Python, SQL
Cloud & Big Data: Extensive experience with cloud-based, large-scale processing, including notifications, queuing, serverless functions, event-driven processing, infrastructure as code, containerization, process monitoring and optimization, identity and access management, and service-to-service access
Statistics: Working familiarity with descriptive statistics, managing bias in large-scale data mining, experimental design, and sampling strategies
DevOps and Infrastructure as Code: CI/CD, versioning, Docker & Kubernetes, GitHub, Jira, Jenkins, Poetry, Terraform