Lead the development and implementation of advanced perception algorithms for object detection, tracking, classification, and segmentation using camera, lidar, and radar data
Translate technology solutions into measurable business impact
Design and implement sensor fusion algorithms to combine data from multiple sensors to create a robust and accurate perception system
Optimize perception algorithms for real-time performance on embedded hardware
Collaborate with other engineers and teams to integrate perception systems into larger autonomous systems
Stay abreast of the latest research and advancements in machine learning and computer vision and apply them to real-world problems
Productionize and deploy solutions onto autonomous vehicle fleets
Derive actionable insights from large data sets drawn from a variety of sources
Mentor and guide junior engineers on the team
Requirements
Master's or Ph.D. in Machine Learning, Computer Science, Robotics, Applied Mathematics, Statistics, Physics, or a related field; or equivalent industry experience
Strong leadership skills, with experience executing large, complex technical initiatives
In-depth understanding of common Machine Learning and Deep Learning algorithms
Experience designing, training, and analyzing neural networks for at least one of the following applications: object detection, semantic/instance segmentation, visual classification, motion/gesture recognition, sensor fusion, multitask learning, multi-object tracking, or end-to-end perception
Experience with deep learning frameworks such as TensorFlow or PyTorch
Fluency in Python, including standard scientific computing libraries, and experience developing Python bindings
Proven track record of developing and deploying perception systems for autonomous vehicles or robotics
Advanced knowledge of software engineering principles, including software design, source control management, build processes, code reviews, and testing methods