Responsibilities
Prototype state-of-the-art quantization and sparsity recipes for LLM workloads
Design and execute post-training quantization or quantization-aware distillation experiments: prepare SFT/RL calibration datasets, manage checkpoint-level eval sweeps, and iterate on recipes based on results
Run accuracy and verbosity evaluations of quantized/sparsified LLM workloads at cluster scale
Develop data analysis tooling and visualizations for numerics debugging
Participate in code reviews and incorporate feedback
Contribute improvements upstream to open-source inference and optimization libraries; publish findings at ML conferences where appropriate
Requirements
Proficiency in Python and PyTorch
Experience with quantization, sparsity, or other model compression techniques
Ability to design and run rigorous experiments: controlled ablations, statistical significance testing, and reproducible setups
Familiarity with LLM evaluation methodology (benchmarks, human-preference proxies, verbosity metrics)
MS/PhD in Computer Science, Computer Engineering, Machine Learning, or equivalent experience
3+ years of experience in an applied ML role
Demonstrated ability to move fast under ambiguous requirements, paired with strong written and verbal communication skills