We’re looking for a software engineer to join our Infrastructure team to build and operate the core systems that power our rapidly growing AI agent platform for customer support.
As part of the AI Infrastructure team, you’ll be responsible for the systems that keep Assist fast, reliable, and secure.
You’ll work on foundational platform components that power real-time LLM usage at scale, while also exploring how we can use AI internally to make our engineering team more productive.
This team is highly cross-functional, working closely with the AI, security, and product engineering teams.
This is a high-ownership role for someone who’s excited by 0-to-1 building and shaping the infrastructure backbone of our AI products.
Requirements
Have 6+ years of engineering experience, with past ownership of high-scale, production-critical infrastructure
Have experience with distributed systems and container orchestration (especially Kubernetes)
Have worked with AI/ML platforms or are excited to build foundational infrastructure for LLM-based applications
Thrive in fast-paced environments with shifting requirements and ambiguous problem spaces
Are motivated by impact, enjoy deep technical challenges, and want to work cross-functionally across security, AI, and product
Have strong familiarity with one or more parts of our tech stack:
Cloud provider: AWS
Orchestration: Kubernetes + Karpenter
LLM integration: Experience with OpenAI, Anthropic, or open-source model serving (e.g., vLLM, HuggingFace TGI, Ray Serve)