Snoonu is Qatar’s homegrown Super App, reinventing daily life with blazing-fast delivery and shopping solutions. They are seeking an AI/Autonomous Systems Engineer to develop and implement algorithms for autonomous perception, localization, and navigation for ground robots and aerial drones, while collaborating with cross-functional teams to ensure seamless integration of perception outputs.
Responsibilities:
- Develop and implement autonomous perception, localization, and navigation algorithms for ground robots and aerial drones within Snoonu’s AI-driven autonomous logistics platform
- Design and optimize multi-sensor perception pipelines using LiDAR, radar, RGB/depth cameras, IMU, and GPS to ensure robust operation in Qatar’s challenging conditions (heat, dust, glare, reflective surfaces)
- Develop and improve object detection, free-space segmentation, tracking, and obstacle prediction modules to support safe real-world navigation
- Implement localization and mapping capabilities, including Visual-Inertial SLAM, sensor fusion-based localization, and GPS-assisted fallback mechanisms
- Build simulation-based testing workflows to validate autonomy performance under realistic Qatari environments, traffic patterns, and operational constraints
- Collaborate with robotics, embedded, and cloud teams to ensure seamless integration of perception outputs into motion planning, control, fleet management, and Robotics-as-a-Service (RaaS) orchestration layers
- Evaluate model performance against defined validation metrics (accuracy, latency, robustness) and contribute iterative improvements toward Technology Readiness Level (TRL) advancement targets
- Support dataset collection, labeling strategies, and pipeline improvements to enable continuous model training and validation from real-world deployments
- Participate in on-site experiments, field testing, and pilot demonstrations, debugging and performance-tuning autonomy modules as needed
- Document model designs, experimental results, and validation procedures to support R&D reporting, regulatory compliance, and knowledge transfer activities
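For candidates gauging the expected depth of the responsibilities above, the "sensor fusion-based localization" and "GPS-assisted fallback" items can be illustrated with a minimal, hypothetical sketch: a 1-D Kalman filter that fuses dead-reckoned motion (e.g., IMU-integrated velocity) with noisy GPS position fixes. All function names, noise parameters, and data here are illustrative assumptions, not Snoonu's actual stack.

```python
# Illustrative sketch only: 1-D Kalman filter fusing a dead-reckoned
# motion model with noisy GPS fixes. Parameter values are assumptions.

def kalman_fuse(gps_fixes, velocity, dt=1.0, q=0.1, r=4.0):
    """Fuse GPS position fixes with a constant-velocity motion model.

    q: process-noise variance (trust in the motion model)
    r: measurement-noise variance (trust in the GPS fix)
    """
    x, p = gps_fixes[0], r        # initialize state from the first fix
    estimates = [x]
    for z in gps_fixes[1:]:
        # Predict: propagate position using dead-reckoned velocity
        x = x + velocity * dt
        p = p + q
        # Update: blend in the GPS fix, weighted by relative uncertainty
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates

# Toy scenario: robot moves at 1 m/s; GPS fixes are noisy samples
# around the true positions 0..5 m.
fixes = [0.0, 1.3, 1.8, 3.2, 3.9, 5.1]
est = kalman_fuse(fixes, velocity=1.0)
```

In a real stack this single scalar filter would be replaced by a full-state estimator (e.g., an EKF over pose, velocity, and IMU biases), but the predict/update structure is the same.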
Requirements:
- Bachelor's or Master's degree in Artificial Intelligence, Robotics, Computer Vision, Computer Engineering, or a related field
- 2–4 years of hands-on software development experience, with strong Python/C++ expertise
- Strong analytical and problem-solving skills with the ability to work on complex real-world robotics challenges
- Research-oriented mindset and ability to translate experimentation into production-ready autonomy improvements
- Ability to work effectively in cross-functional teams (robotics, embedded, platform/software, operations)
- Clear communication skills and ability to document technical work, trade-offs, and validation outcomes
- High ownership and accountability for results, timelines, and engineering quality
- Adaptability to fast-paced R&D environments involving prototyping, testing, and iterative development
- Strong business context understanding, able to translate operational needs into technical solutions
- Open to feedback and proactive in applying improvements suggested by senior engineers or tech leads
- Strong foundation in Python, including OOP principles, design patterns, and writing clean, maintainable code
- Experience building backend services using frameworks such as FastAPI, Flask, or Django
- Ability to design, develop, and maintain RESTful APIs with proper error handling and logging
- Experience using AWS services such as Lambda, SQS/SNS, API Gateway, Step Functions, DynamoDB, RDS, S3, and CloudWatch
- Experience designing event-driven and serverless architectures
- Familiarity with IAM, environment configuration, and cloud security best practices
- Strong proficiency in Python and experience using deep learning frameworks
- Experience in one or more autonomy domains: computer vision and perception pipelines; sensor fusion; localization and mapping (SLAM / visual-inertial odometry); object detection, tracking, and segmentation; motion planning support systems
- Familiarity with robotics development environments such as ROS / ROS2 and real-time robotics data pipelines
- Experience working with LiDAR point clouds, camera streams, and IMU-based motion data
- Ability to optimize ML/AI systems for real-time performance and edge deployment constraints
- A PhD is a plus for research-heavy autonomy work
- Experience with simulation tools or autonomy test environments (e.g., Gazebo, CARLA, AirSim, Isaac Sim) is a strong plus
- Knowledge of safe autonomy principles, anomaly detection, or human-in-the-loop safety mechanisms is a plus
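As a rough indicator of the level expected in the "object detection / tracking" domain listed above, here is a minimal, self-contained sketch of intersection-over-union (IoU) box overlap and a greedy frame-to-frame track association step. The box format, threshold, and function names are illustrative assumptions, not a specific library's API.

```python
# Illustrative sketch only: IoU between axis-aligned boxes (x1, y1, x2, y2)
# and greedy matching of existing tracks to new detections.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def associate(tracks, detections, thresh=0.3):
    """Greedily match each track to its best-overlapping unused detection."""
    matches, used = {}, set()
    for tid, tbox in tracks.items():
        best, best_j = thresh, None
        for j, dbox in enumerate(detections):
            score = iou(tbox, dbox)
            if j not in used and score > best:
                best, best_j = score, j
        if best_j is not None:
            matches[tid] = best_j   # track tid continues as detection best_j
            used.add(best_j)
    return matches
```

Production trackers typically replace the greedy loop with Hungarian assignment and add motion prediction, but this captures the core association idea.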