Building and operating AI/HPC infrastructure for new and existing customers.
Supporting operational and reliability aspects of large-scale AI clusters, focusing on performance at scale, real-time monitoring, logging, and alerting.
Engaging in and improving the whole lifecycle of services—from inception and design through deployment, operation, and refinement.
Maintaining services once they are live by measuring and monitoring availability, latency, and overall system health, with an emphasis on optimising service performance and availability to meet requirements and SLAs.
Providing feedback to internal teams, such as filing bugs, documenting workarounds, and suggesting improvements.
Requirements
BS/MS/PhD or equivalent experience in Computer Science, Electrical/Computer Engineering, Physics, Mathematics, or related fields.
5+ years of professional experience with networking fundamentals, the TCP/IP stack, and data center architecture.
Proficiency in configuring, testing, validating, and resolving issues in LAN networks, especially in medium to large-scale HPC/AI environments.
Advanced knowledge of EVPN, BGP, OSPF, and VXLAN.
Hands-on experience with network switch/router platforms such as Cumulus Linux, SONiC, Cisco IOS, Junos OS, and Arista EOS.
Extensive experience delivering automated network provisioning solutions using tools such as Ansible, Salt, and Python.
Ability to develop CI/CD pipelines for network operations.
Strong focus on customer needs and satisfaction.
Self-motivated, with the leadership skills to work collaboratively with customers and internal teams.
Strong written, verbal, and listening skills in English are essential.
Tech Stack
Ansible
Cisco IOS
Linux
Python
SaltStack
TCP/IP
Benefits
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer.