Microsoft AI is looking for an experienced Member of Technical Staff, High Performance Computing Engineer to help build and scale the infrastructure that trains our frontier models and powers the next evolution of our personal AI, Copilot. This role involves designing, operating, and maintaining large-scale HPC environments while collaborating closely with researchers and engineers to support their workloads and troubleshoot issues.
Responsibilities:
- Design, operate, and maintain large-scale HPC environments, drawing on hands-on engineering experience in production settings
- Own the deployment, configuration, and day-to-day operation of HPC schedulers (e.g., SLURM, Kubernetes), ensuring reliable and efficient job scheduling at scale
- Serve as a technical owner for at least one core HPC domain (GPU compute, high-performance storage, networking, or similar), including ongoing maintenance, performance tuning, and troubleshooting of massive clusters
- Develop and maintain automation and tooling using Bash and/or Python to improve cluster reliability, observability, and operational efficiency
- Partner closely with researchers and engineers to support their workloads, troubleshoot cluster usage issues, and triage failed or underperforming jobs to resolution
- Drive work forward independently by navigating ambiguity and technical roadblocks, delivering incremental improvements that get capabilities into users’ hands quickly
- Enjoy working in a fast-paced, design-driven product development environment, balancing stability with rapid iteration and experimentation
- Embody our Culture and Values
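As a flavor of the Bash/Python automation and triage work described above, here is a minimal illustrative sketch (not Microsoft tooling; the function names and the set of "unhealthy" states are assumptions) that tallies node health from SLURM's `sinfo` output, using the real `%T` (node state) and `%D` (node count) format specifiers:

```python
from collections import Counter

def count_node_states(sinfo_output: str) -> Counter:
    """Tally SLURM node states from `sinfo -h -o '%T %D'` output
    (one '<state> <count>' pair per line)."""
    states: Counter = Counter()
    for line in sinfo_output.strip().splitlines():
        state, count = line.split()
        states[state] += int(count)
    return states

def unhealthy_nodes(states: Counter) -> int:
    """Count nodes in states that typically need operator attention.
    SLURM marks non-responding nodes with a trailing '*', which is
    stripped before matching. The 'bad' set here is an illustrative
    policy choice, not an exhaustive list of SLURM states."""
    bad = {"down", "drained", "draining", "fail", "failing"}
    return sum(n for s, n in states.items() if s.rstrip("*") in bad)

if __name__ == "__main__":
    # In production the input would come from something like:
    #   subprocess.run(["sinfo", "-h", "-o", "%T %D"],
    #                  capture_output=True, text=True).stdout
    sample = "idle 120\nallocated 890\ndrained 4\ndown* 2\n"
    print(unhealthy_nodes(count_node_states(sample)))  # -> 6
```

A script like this might feed a dashboard or page an on-call engineer when the unhealthy count crosses a threshold, which is the kind of reliability and observability tooling this role owns.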
Requirements:
- Bachelor's degree in Computer Science or related technical field AND 4+ years of technical engineering experience deploying or operating on-premises or cloud high-performance clusters
- 4+ years of experience working with high-scale training clusters (e.g., frameworks/tools such as NVIDIA InfiniBand clusters, SLURM, Kubernetes, Ray)
- 4+ years of experience building scalable services on top of public cloud infrastructure such as Azure, AWS, or GCP, OR equivalent experience
Preferred Qualifications:
- Master's degree in Computer Science or related technical field AND 6+ years of technical engineering experience deploying or operating on-premises or cloud high-performance clusters
- 6+ years of experience working with high-scale training clusters (e.g., frameworks/tools such as NVIDIA InfiniBand clusters, SLURM, Kubernetes, Ray)
- 6+ years of experience building scalable services on top of public cloud infrastructure such as Azure, AWS, or GCP, OR equivalent experience
- Experience with LLM training clusters
- Experience working with AI platforms, frameworks, and APIs
- Experience using machine learning frameworks, including experience using, deploying, and scaling large language models, either personally or professionally
- Experience working with large-scale HPC or GPU systems (e.g., NVIDIA H100/GB200 or equivalent)
- Ability to identify, analyze, and resolve complex technical issues, ensuring optimal performance, scalability, and user experience
- Dedication to writing clean, maintainable, and well-documented code with a focus on application quality, performance, and security
- Demonstrated interpersonal skills and ability to work closely with cross-functional teams, including product managers, designers, and other engineers
- Ability to clearly communicate complex technical concepts to both technical and non-technical stakeholders
- Passion for learning new technologies and staying up to date with industry trends, best practices, and emerging technologies
- Ability to work in a fast-paced environment, manage multiple priorities, and adapt to changing requirements and deadlines
- Proven ability to collaborate and contribute to a positive, inclusive work environment, fostering knowledge sharing and growth within the team