SambaNova is a pioneering company in the generative AI space, offering a full-stack platform optimized for enterprise and government organizations. The Senior Product Manager will drive execution for SambaRack, ensuring product specifications are met, customer integration challenges are addressed, and the product is well-documented and positioned for market success.
Responsibilities:
- Author clear, rigorous PRDs across SambaRack's core workstreams: inference engine integrations, containerized deployment artifacts, and bare-metal and Kubernetes deployment paths
- Translate product definition into engineering-ready specifications with well-defined requirements and acceptance criteria
- Own the release roadmap: sequence workstreams, manage dependencies, and drive delivery against commitments
- Work directly with customers as they integrate SambaRack: understand their integration challenges, surface product gaps, and feed them back into the roadmap
- Partner with field teams to ensure SambaRack is positioned and sold correctly against the ideal customer profile (ICP)
- Own the feedback loop between early customers and engineering
- Own SambaRack's documentation strategy: define what great looks like for a hardware integration product, and hold the team accountable to it
- Define and maintain the supported features list, compatibility matrix, and release notes
- Define the integration experience customers should have and hold engineering accountable to it
- Work with sales and marketing to ensure SambaRack's positioning is crisp
Requirements:
- 5+ years of product management experience in infrastructure, compute platforms, or AI/ML systems
- Execution-oriented: You know how to take a defined product strategy and drive it to shipped software. You write good PRDs, run tight review cycles, and keep engineering moving
- Technical depth: You can hold a credible conversation with engineers about inference engine architecture, Kubernetes deployment patterns, and observability integrations. You don't need to write the code, but you do need to understand the tradeoffs
- Customer-facing: You have direct experience working with infrastructure or platform engineering teams during technical integrations or evaluations. You understand their operational priorities and can translate their feedback into product requirements
- OSS-aware: You understand what it means to ship and maintain an open-source artifact: community support, release cadence, compatibility commitments, and documentation expectations
- Inference infrastructure expertise: Deep familiarity with vLLM and how inference engines integrate with accelerator hardware
- Inference cloud fluency: You understand how inference cloud operators think: they want standard interfaces, minimal operational overhead, and hardware that slots into their existing toolchain
- Documentation as product: You treat documentation as a first-class product deliverable, not an afterthought. You know what good looks like for a hardware integration product