Design and implement large-scale C++ systems around AI and algorithmic pipelines in a real-time-oriented environment.
Integrate neural network models into production software, including model runtime integration, data flow, pre/post-processing, and system-level validation.
Work closely with deep learning and computer vision algorithm teams to take models from research/prototype stage into robust production systems.
Build and own application-level infrastructure that enables algorithmic and neural network solutions to run efficiently on edge devices.
Optimize runtime performance, memory usage, latency, and throughput in resource-constrained environments.
Contribute to neural network deployment flows, such as ONNX Runtime integration, model conversion, inference execution, profiling, and optimization.
Lead end-to-end development of features, from design and implementation through integration, testing, and deployment.
Take part in building CI/CD processes, automated testing, and development workflows for production algorithm systems.
Debug complex real-time systems involving C++ infrastructure, algorithmic logic, and neural network execution.
Collaborate with multiple teams across Mobileye, gaining deep exposure to both system architecture and state-of-the-art AI algorithms.
Gradually take on broader technical ownership, mentor others, and grow into a technical leadership or small-team leadership role.
Requirements
B.Sc. in Computer Science, Software Engineering, or equivalent.
5+ years of hands-on C++ development experience.
Strong understanding of object-oriented design, software architecture, and large-scale system development.
Experience working in Linux environments.
Strong motivation to work closely with deep learning algorithms and production AI systems.
Interest in neural network deployment on edge devices, including inference runtimes, performance optimization, and model integration.
A proactive, ownership-driven mindset, with interest in growing into a broader technical leadership role.
Experience with performance optimization, memory efficiency, and real-time systems.
Experience with ONNX Runtime, TensorRT, or similar inference runtimes.
Familiarity with CI/CD processes and automated testing.
Experience working closely with algorithm, computer vision, deep learning, or data teams.
Background in computer vision, deep learning systems, or embedded/edge AI deployment.