Responsibilities
Build complex agentic systems featuring multi-agent coordination, long-horizon reasoning, and advanced planning frameworks.
Develop full-scale solutions, including domain-specific enterprise agents and high-performance retrieval pipelines (RAG) spanning various data sources.
Optimize inference performance by leveraging GPU-accelerated frameworks and the full NVIDIA AI infrastructure stack.
Build hands-on PoCs and reference architectures that serve as the blueprint for production-grade generative AI pipelines.
Partner with Enterprise ISVs to integrate NVIDIA software into their native platforms, accelerating the deployment of production workloads.
Collaborate with cross-functional internal teams to improve NVIDIA software based on feedback from real-world implementations.
Empower partner engineering teams through technical workshops, deep-dive architecture reviews, and developer enablement.
Scale global expertise by crafting reusable assets and documentation that help field teams deploy agentic AI at scale.
Requirements
BS/MS/PhD in Computer Science, Electrical Engineering, AI/ML, or equivalent experience.
More than 5 years of experience in deep learning, machine learning, or distributed AI systems.
Strong programming and debugging experience in Python, C/C++, and Linux environments.
Proficiency with deep learning libraries such as PyTorch or TensorFlow.
Hands-on experience building LLM and generative AI applications.
Experience working with agentic or multi-agent AI systems using frameworks such as LangGraph, LlamaIndex, CrewAI, LangChain, the OpenAI Agents SDK, or similar orchestration frameworks.
Experience building tool-using AI agents that interact with APIs, databases, and enterprise systems.
Ability to rapidly prototype AI applications and build scalable GPU-accelerated architectures.
Excellent interpersonal skills and the ability to collaborate effectively with engineering teams, partners, and executive stakeholders.