Aurelius Systems is a VC-backed defense-tech startup building autonomous directed-energy systems for counter-UAS missions. The Perception Engineer will develop, train, and deploy vision- and sensor-based models that enable real-time detection and tracking for laser-based defense systems.
Responsibilities:
- Design, train, validate, and fine-tune machine-learning and deep-learning models (e.g., YOLO, RT-DETR, CNNs) for object detection, classification, and segmentation
- Integrate and fuse data from multi‑modal sensors (RGB, thermal, LiDAR/ToF, IMU, encoders) to produce robust, real‑time Regions of Interest (ROIs)
- Research, implement, and, as needed, develop high- and low-level image-processing techniques such as deconvolution, low-SNR detection, and motion isolation
- Collaborate with hardware teams to integrate and troubleshoot sensors (global‑shutter and rolling‑shutter cameras, thermal imagers, LiDAR/ToF modules, IMUs) over GigE Vision, USB3 Vision, CAN, SPI, and I²C protocols
- Develop and debug embedded firmware in C/C++ (or Rust) for microcontrollers (STM32, NXP, TI) and FPGAs using VHDL/Verilog within RTOS environments (FreeRTOS, Zephyr)
- Build scalable data ingestion, labeling, augmentation, and storage pipelines for simulated and field data, ensuring label accuracy across 100k+ frames
- Optimize inference frameworks for edge deployment (GPU/FPGA), achieving ≥500 Hz end‑to‑end throughput
- Develop dashboards and telemetry for drift analysis, hardware health monitoring, performance metrics, and automated retraining triggers
- Author clear technical documentation; mentor junior engineers on best practices in vision, sensor fusion, and embedded firmware engineering
- Determine development needs by directly analyzing the technical and physical constraints of program goals