General Motors is redefining what’s possible in driver assistance and automated driving. We are seeking a Staff Software Engineer to provide technical leadership for the Secondary Driving System (SDS), focusing on building robust C++ software and leading high-impact projects across the SDS stack.
Responsibilities:
- Serve as a technical lead for SDS software across multiple components of the stack, setting direction for algorithms, architectures, and system interfaces across features and releases
- Own the end-to-end technical strategy for key SDS behaviors and features, spanning perception/prediction integration, planning, controls, and system-level interactions
- Balance hands-on technical work with cross-team leadership: you will still design and implement critical components in modern C++, while also guiding other senior and mid-level engineers to deliver at scale
- Collaborate closely with experts in perception, tracking, prediction, state estimation, localization, mapping, planning, controls, systems engineering, and safety to deliver robust, fail-operational behaviors for Super Cruise and future products
- Define the technical vision and architecture for the SDS
- Set the technical direction for SDS software components with a focus on correctness, robustness, and predictable runtime behavior under tight latency and compute budgets
- Architect scalable, modular multi-sensor perception pipelines for camera, radar, and lidar, including detection, classification, lane/road feature extraction, freespace/occupancy, and environmental context
- Establish and evolve interfaces and contracts between perception/prediction and upstream/downstream components (state estimation, localization, mapping, planning, controls, autonomy management)
- Lead design and delivery of multi-object tracking systems (e.g., Kalman/extended/unscented filters, IMM, probabilistic data association, track lifecycle management) that provide stable, high-quality tracks under real-world noise and edge cases
- Drive development and integration of short-horizon motion prediction for vehicles, vulnerable road users (VRUs), and other actors using a mix of analytical models and ML-based forecasting, including uncertainty modeling that enables conservative, fail-operational decisions
- Evaluate trade-offs between traditional computer vision/robotics and ML-based approaches, choosing the right tool for the problem and ensuring solutions are production-ready under latency and compute constraints
- Design and implement critical components in modern C++ (C++17 or later), with careful attention to memory management, concurrency, and real-time behavior; use Python for tooling, data analysis, and ML experimentation
- Set and uphold high standards for software quality: clean, well-documented APIs; rigorous code reviews; automated and regression testing; continuous integration; and rich logging and observability for on-road incidents
- Profile and optimize SDS components to meet strict runtime performance, determinism, and resource-usage requirements, using offline and online evaluation frameworks and metrics to guide iteration, regression detection, and performance tuning
- Work with state estimation, localization, mapping, and autonomy management partners to ensure SDS perception and prediction behavior supports reliable engagement and minimal risk maneuvers (MRM) in response to primary-stack failures
- Partner with Safety and Systems Engineering to ensure designs and implementations align with functional safety, redundancy, and MRM requirements for fail-operational, eyes-off features
- Represent SDS perception and prediction in cross-org technical forums, ensuring alignment with broader autonomy, platform, and hardware roadmaps
- Provide technical mentorship to other engineers, from onboarding to growing senior and staff-level talent in perception/tracking/prediction and modern C++
- Lead and facilitate design reviews, incident post-mortems, and cross-team technical deep dives, raising the bar for clarity, robustness, and execution speed
- Help build a healthy engineering culture: pragmatic, data-driven decision-making; strong ownership; and a focus on safety, reliability, and customer experience
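The tracking responsibilities above center on recursive state estimation. As a flavor of the fundamentals involved (an illustrative sketch only, not GM's implementation), here is a minimal one-dimensional constant-velocity Kalman filter in modern C++; the struct name, noise values, and motion model are assumptions chosen for brevity:

```cpp
#include <cassert>
#include <cmath>

// Illustrative 1-D constant-velocity Kalman filter (not production code).
// State: [position, velocity]; measurement: position only (H = [1, 0]).
struct Kalman1D {
    double pos = 0.0, vel = 0.0;                        // state estimate
    double P00 = 1.0, P01 = 0.0, P10 = 0.0, P11 = 1.0;  // covariance
    double q = 0.01;  // process-noise intensity (assumed tuning value)
    double r = 0.25;  // measurement-noise variance (assumed tuning value)

    // Propagate the state and covariance forward by dt seconds.
    void predict(double dt) {
        pos += dt * vel;  // x' = F x, with F = [[1, dt], [0, 1]]
        // P' = F P F^T + Q (simplified diagonal process noise)
        double p00 = P00 + dt * (P10 + P01) + dt * dt * P11;
        double p01 = P01 + dt * P11;
        double p10 = P10 + dt * P11;
        double p11 = P11;
        P00 = p00 + q * dt;
        P01 = p01;
        P10 = p10;
        P11 = p11 + q * dt;
    }

    // Fuse one position measurement z.
    void update(double z) {
        double y = z - pos;   // innovation
        double s = P00 + r;   // innovation variance, S = H P H^T + R
        double k0 = P00 / s;  // Kalman gain, K = P H^T / S
        double k1 = P10 / s;
        pos += k0 * y;
        vel += k1 * y;
        // P = (I - K H) P
        double p00 = (1.0 - k0) * P00;
        double p01 = (1.0 - k0) * P01;
        double p10 = P10 - k1 * P00;
        double p11 = P11 - k1 * P01;
        P00 = p00; P01 = p01; P10 = p10; P11 = p11;
    }
};
```

Production trackers extend this same predict/update pattern to multi-dimensional states, gating and data association, track lifecycle management, and IMM-style model mixing, as described above.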
Requirements:
- BS, MS, or PhD in Computer Science, Robotics, Electrical/Mechanical Engineering, or a related field; or equivalent practical experience
- 8+ years of professional software engineering experience building production systems in robotics, autonomous vehicles, or other complex real-time/control systems, including significant experience in perception and/or prediction
- Strong proficiency in modern C++ (e.g., C++14/17 or later) in large, multi-contributor codebases; experience using Python for tooling, data analysis, and ML experimentation
- Demonstrated experience leading technical design and delivery of perception, tracking, or prediction systems in real-time environments, including:
  - Multi-sensor fusion across camera, radar, and/or lidar (e.g., object-level fusion, occupancy/freespace fusion, early/late fusion architectures)
  - Classical computer vision and geometric algorithms (feature extraction, multi-view geometry, stereo, SfM, SLAM/visual odometry)
  - Multi-object tracking (Kalman/extended/unscented filters, track-to-track fusion, track lifecycle management)
  - Motion prediction for road users (analytical kinematic models, maneuver-based prediction, or learned trajectory forecasting models)
- Proven track record of delivering reliable, high-quality robotics or autonomous driving software to production, including:
  - Testing strategies (simulation, HIL, scenario-based testing, regression suites)
  - Robust metrics and dashboards for monitoring perception/prediction performance
  - Performance tuning under strict latency and compute budgets
- Strong communication and collaboration skills, with the ability to:
  - Drive clarity in ambiguous technical spaces
  - Influence engineers and leaders across ML, systems, platform, hardware, and safety
  - Document and communicate complex technical concepts to diverse audiences
- Passion for automated driving and robotics, and for building systems that measurably improve safety and driver experience
Preferred Qualifications:
- Experience building or leading camera/radar/lidar perception and fusion for autonomous driving or advanced driver assistance systems in production
- Deep expertise in tracking and prediction for autonomous vehicles or robotics (e.g., interaction-aware prediction, occupancy forecasting, scene-level prediction)
- Hands-on experience with GPU/accelerator-based ML inference, model deployment, and performance optimization (e.g., TensorRT, ONNX Runtime, custom accelerators)
- Experience with safety-critical software or working closely with functional safety teams on requirements, architectures, safety cases, and validation for fail-operational features
- Background in ROS or similar robotics middleware, and familiarity with real-time or embedded platforms and constraints