Luma AI is a pioneering company building multimodal AGI systems that understand and generate content across multiple media. It is seeking a Lead Infrastructure and Reliability Engineer to architect and operate large GPU environments, improve system reliability, and build a strong engineering organization. This role is critical in defining the reliability standards for Luma's cutting-edge AI infrastructure.
Responsibilities:
- Architect and operate large, heterogeneous GPU environments under extreme demand
- Improve utilization and performance where small gains materially change company outcomes
- Resolve failures that span hardware, OS, runtimes, and orchestration
- Eliminate entire classes of instability
- Build mechanisms that make heroics unnecessary
- Define how infrastructure and workloads evolve as cluster size and concurrency grow
- Design scheduling, placement, and resource management approaches for increasingly complex jobs
- Work directly with research to build the systems required for new model capabilities
- Ensure inference platforms scale rapidly without sacrificing reliability or latency
- Anticipate where today’s abstractions will fail and redesign ahead of them
- Hire and develop exceptional systems and reliability engineers
- Set the bar for technical depth, judgment, and production ownership
- Shape architecture early through strong partnerships with research and product
- Translate reliability constraints into long-term platform strategy
Requirements:
- Deep expertise in Linux and distributed systems
- Experience operating GPU/accelerator clusters in real production environments
- Strong fluency in Kubernetes and modern open-source infrastructure
- Comfort debugging across the full stack: hardware → kernel → runtime → orchestration
- A clear understanding of how systems behave under contention and at scale
- A habit of writing code and building automation
- An instinct for thinking in bottlenecks, failure modes, and tradeoffs
- The trust of other engineers, especially when things break