Own services in the live call path — from feature delivery through deployment, monitoring, and incident follow-through.
Write code across the conversational AI stack: speech-to-text, text-to-speech, Large Language Model (LLM) orchestration, and telephony.
Build and maintain observability for your services — metrics, structured logs, distributed tracing, and alerts tied to service-level indicators (SLIs) and reliability targets you help define.
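To make the SLI work above concrete: a service-level indicator is often just a small, well-defined computation over request records, such as an availability ratio and a latency percentile that alerts compare against a target. The sketch below is illustrative only — the function names and the nearest-rank percentile method are assumptions, not anything specified in this role description.

```python
import math

def availability_sli(success_count: int, total_count: int) -> float:
    """Fraction of well-formed requests served successfully.

    Illustrative SLI: by convention, an empty window counts as fully
    available rather than dividing by zero.
    """
    return success_count / total_count if total_count else 1.0

def p95_latency_ms(samples_ms: list[float]) -> float:
    """Nearest-rank 95th-percentile latency over a window of samples."""
    if not samples_ms:
        raise ValueError("need at least one latency sample")
    ranked = sorted(samples_ms)
    # Nearest-rank method: the smallest value with at least 95% of
    # samples at or below it.
    idx = max(0, math.ceil(0.95 * len(ranked)) - 1)
    return ranked[idx]
```

An alerting rule would then compare these against the reliability targets the team defines, e.g. page when `availability_sli(...) < 0.999` over a rolling window.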
Write Infrastructure as Code using AWS CDK for containerized (ECS) and serverless (Lambda) architectures.
Build fault-tolerant telephony integrations with graceful degradation, retry logic, and error handling.
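The retry-and-degrade pattern in the bullet above can be sketched in a few lines: retry a flaky upstream call with exponential backoff and jitter, then fall back to a degraded response instead of failing the call outright. All names here (`call_with_retry`, `primary`, `fallback`) are hypothetical illustrations, not part of any system described in this posting.

```python
import random
import time
from typing import Callable, TypeVar

T = TypeVar("T")

def call_with_retry(
    primary: Callable[[], T],
    fallback: Callable[[], T],
    max_attempts: int = 3,
    base_delay: float = 0.1,
) -> T:
    """Call `primary` with retries; degrade gracefully to `fallback`.

    Uses exponential backoff with full jitter between attempts so that
    concurrent callers don't retry in lockstep against a struggling
    upstream (e.g. a telephony or model provider).
    """
    for attempt in range(max_attempts):
        try:
            return primary()
        except Exception:
            if attempt == max_attempts - 1:
                break  # exhausted retries; fall through to fallback
            # Full jitter: sleep a random amount up to the backoff cap
            # (base_delay, 2*base_delay, 4*base_delay, ...).
            time.sleep(random.uniform(0, base_delay * 2 ** attempt))
    return fallback()
```

In a live call path the fallback might play a pre-recorded prompt or route to a simpler model, keeping the call alive while the primary dependency recovers.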
Deliver real-time conversational voice AI, including the low-latency infrastructure it depends on.
Monitor and reduce spend across model providers, telephony usage, and AWS workloads without degrading latency or call quality.
Collaborate with AI Engineers on model integration and optimization work.
Requirements
Computer science degree (or equivalent practical experience with data structures, systems, and distributed computing).
3+ years building and operating production backend systems.