Invoca is the leading AI-native platform that enables teams to understand and act on customer interactions. The Staff Product Manager for the AI Platform will own the product vision and roadmap, focusing on the foundational layer that runs and governs AI agents across channels and products.
Responsibilities:
- Drive Strategy: Own the end-to-end product strategy and roadmap for Invoca's AI Platform, aligning technical foundations with company-level AI principles around reliability, safety, and measurable outcomes
- Define Execution: Establish how AI agents execute in production, including orchestration patterns, tool invocation semantics, context handling, and behavior across asynchronous and real-time environments
- Architect Contracts: Establish clear, durable platform contracts—APIs, configuration schemas, versioned artifacts—that make agent behavior portable, governable, and scalable across products and channels
- Partner for Scale: Collaborate deeply with AI engineering and data teams to translate agentic patterns into reliable systems, making principled tradeoffs across reasoning quality, latency, cost, and operational complexity
- Enable Adoption: Drive standardization through paved roads, golden paths, SDKs, and shared components that reduce bespoke implementations and accelerate internal adoption
- Enforce Quality: Define and enforce platform-level quality, safety, and governance mechanisms, including evaluation workflows, promotion gates, observability, Role-Based Access Control (RBAC), and auditability
- Shape Experience: Own the developer and product experience for building, testing, debugging, and operating AI agents, ensuring the platform is intuitive, debuggable, and trusted by internal teams
- Guide Evolution: Guide the evolution from an internal platform to a product-grade system, informing customer-facing configuration and studio experiences without compromising guardrails, privacy, or reliability
- Make Build/Buy Decisions: Evaluate build-versus-buy tradeoffs for foundational AI infrastructure and guide vendor integrations behind stable, well-defined platform contracts
- Measure Success: Define success metrics for the AI Platform and continuously assess impact through adoption, reuse, quality, performance, and cost efficiency
- Stay Current: Stay current on advancements in agentic AI, orchestration frameworks, and evaluation best practices, translating signal into pragmatic product decisions rather than chasing hype
Requirements:
- 8+ years of product management experience, with meaningful time spent building platforms, infrastructure, or developer-facing products used by other teams
- Background in engineering, computer science, or equivalent hands-on experience building and operating complex software systems
- Strong systems-level understanding of AI-powered and distributed systems, including LLM-based agents, multi-step workflows, retrieval, and tool calling—sufficient to reason about execution models and failure modes without writing production code
- Demonstrated experience defining and scaling platform contracts, including APIs, configuration models, versioning strategies, and lifecycle management
- Proven ability to introduce standards, governance, and quality controls in ways that accelerate product velocity rather than constrain it
- Deep appreciation for evaluation, observability, and regression testing as first-class requirements for AI systems
- Experience operating in enterprise, multi-tenant environments where privacy, security, compliance, and auditability are core design constraints
- Strong instincts for developer experience, including the use of golden paths, tooling, and defaults to drive adoption and reduce fragmentation
- Track record of making hard tradeoffs across latency, cost, quality, and reliability in complex, distributed systems
- Exceptional communication and influence skills, with the ability to align engineering, data science, product, and Go-To-Market (GTM) stakeholders around shared platform goals
- Experience with agentic AI frameworks (e.g., LangChain, LangGraph, ReAct-style systems) or standardized tool invocation models (e.g., MCP)
- Familiarity with real-time systems such as voice, streaming, or low-latency inference
- Background in enterprise SaaS or compliance-heavy domains