NVIDIA, a leader in computer graphics and accelerated computing, is seeking a skilled HPC/AI Benchmarking and Telemetry Engineer to drive performance insights across advanced computing infrastructure. The role involves developing benchmarking methods and telemetry frameworks to improve system performance, and collaborating with teams across the company to optimize workloads.
Responsibilities:
- Design and execute benchmarking methodologies for high-performance computing and AI workloads on large-scale GPU clusters. Analyze performance metrics to identify optimization opportunities and inform architectural improvements
- Develop and maintain telemetry infrastructure to capture performance data across the stack, including host-level GPU/CPU metrics, network fabric utilization, and facility power/thermal characteristics
- Collaborate closely with hardware engineering, software development, and customer-facing teams to define performance requirements, fix bottlenecks, and validate configurations against real-world workloads
- Deploy and manage observability stacks, including Prometheus for monitoring, Grafana for visualization, NVIDIA DCGM, and custom telemetry solutions, to provide actionable insight into cluster health, utilization, and performance trends
- Partner with internal teams and customers to understand their performance requirements, conduct on-site benchmarking engagements, and deliver detailed analysis and recommendations for workload optimization
- Maintain deep knowledge of industry-standard HPC and AI benchmarks such as HPL, HPCG, MLPerf, and the NCCL tests, and contribute to developing new benchmarking methodologies for emerging workloads
Requirements:
- Bachelor's degree in Computer Science, Electrical Engineering, Computer Engineering, or a related field (or equivalent experience)
- 8+ years of hands-on experience with HPC and/or AI infrastructure, including cluster deployment, performance analysis, and benchmarking
- Deep expertise in Linux system administration, including kernel tuning, process scheduling, storage I/O optimization, and troubleshooting performance issues at scale
- Proven experience crafting and implementing telemetry and monitoring solutions for large-scale distributed systems, with proficiency in tools such as Prometheus, Grafana, DCGM, collectd, or similar observability platforms
- Solid grasp of GPU architectures, CUDA programming principles, and GPU performance characteristics in HPC and AI workloads
- Familiarity with job schedulers (Slurm, PBS, LSF) and container orchestration platforms (Kubernetes, Docker) in HPC/AI environments
- Proficiency in Python, Bash, and other scripting languages for automation, data analysis, and workflow orchestration
- Excellent analytical and problem-solving skills with the ability to interpret complex performance data and communicate findings to both technical and non-technical audiences
- Experience with high-performance networking technologies including InfiniBand, RoCE, and Ethernet fabric tuning and performance analysis
- Knowledge of parallel filesystems such as Lustre, GPFS, BeeGFS, Weka, or VAST, including performance tuning and benchmarking
- Background in power and thermal management for high-density compute environments, including PUE optimization and liquid cooling technologies
- Contributions to open-source benchmarking tools or performance analysis frameworks
- Industry certifications such as RHCE, CKA, or vendor-specific HPC/data center credentials