Design and develop new scheduling features and add-on services that improve GPU compute clusters across many dimensions, such as resource-usage fairness, GPU occupancy, reduced GPU waste, application resilience, application performance, and power usage
Design and develop batch workload management and orchestration services
Provide support to staff and end users to resolve batch scheduler issues
Build and improve our ecosystem around GPU-accelerated computing
Analyze and optimize the performance of deep learning workflows
Develop large-scale automation solutions
Perform root-cause analysis and suggest corrective actions for problems at both large and small scales
Find and fix problems before they occur
Requirements
Bachelor’s degree in Computer Science, Electrical Engineering or related field or equivalent experience
5+ years of work experience
Strong understanding of batch scheduling, preferably with experience in schedulers such as SLURM or K8s batch schedulers (Kueue, Volcano, etc.)
Significant experience with systems programming languages such as C/C++ and Go, as well as scripting languages such as Python and Bash
Established experience with the Linux operating system, environment, and tools
Experience analyzing and tuning performance for a variety of AI workloads
In-depth understanding of container technologies such as Docker, Singularity, and Podman
Flexibility/adaptability for working in a dynamic environment with different frameworks and requirements
Excellent communication, interpersonal and customer collaboration skills