Experience with GPU optimization, including memory efficiency, quantization, and mixed precision.
Comfort owning ambiguous, zero-to-one ML systems end-to-end.
A bias toward shipping, learning fast, and improving systems through iteration.
Experience with LLM inference frameworks such as vLLM, TensorRT-LLM, or FasterTransformer.
Contributions to open-source ML or systems libraries.
Background in scientific computing, compilers, or GPU kernels.
Experience with RLHF pipelines (PPO, DPO, ORPO).
Experience training or deploying multimodal or diffusion models.
Experience with large-scale data processing (Apache Arrow, Spark, Ray).
Tech Stack
Apache Arrow
PyTorch
Ray
Spark
Benefits
Our organization is very flat, and our team is small, highly motivated, and focused on engineering and product excellence. Every member is expected to be hands-on and to contribute directly to the company’s mission.