Luma is focused on building multimodal AI to enhance human capabilities. The role involves refining and personalizing Luma's foundation models to drive real-world value, bridging the gap between research and production-ready systems.
Responsibilities:
- You will work as a full-stack applied researcher across modeling, data, systems, and evaluation to adapt and deploy models to production
- You will leverage a toolkit spanning SFT, RL, personalization, distillation, control adapters, and more to develop and maintain model variants purpose-built for user environments and creative partners
- You will leverage proprietary, vertical-specific datasets to create specialized fine-tunes and improve future training recipes, ensuring our models are grounded in data that reflects real-world use cases
- You will define and drive end-user quality: setting success metrics, building user-aligned evaluations, and iterating on the model/data/evals loop to meet strict fidelity and reliability targets in specific enterprise verticals
- You will partner closely with Product, Research, and Design to translate creative intent and user feedback into model behavior, intuitive controls, and production-ready capabilities for users and partners
Requirements:
- Strong ML fundamentals with deep experience in visual generative models (diffusion, transformer, or related architectures)
- Deep understanding of at least one of: fine-tuning, personalization, domain adaptation, data curation, targeted distillation, interpretability, or human-feedback-driven refinement
- Strong Python and deep learning engineering skills (ideally PyTorch)
- Experience adapting and deploying models to production
- Ability to define and drive end-user quality: setting success metrics, building user-aligned evaluations, and iterating on the model/data/evals loop
- Experience in cross-functional collaboration with Product, Research, and Design
- Contributions to state-of-the-art models in image/video generation
- Experience collaborating with creative partners (VFX, animation, film, design tools)
- Track record of building workflows and tools that materially improve iteration speed and evaluation rigor
- Familiarity with large-scale training infrastructure and distributed systems (Ray, Slurm, Kubernetes)