Instrument services, build dashboards, and maintain alerts that surface quality degradation, latency issues, and system anomalies in production
Write and maintain automated tests for agent output quality, including deterministic checks and LLM-as-judge evaluations
Build and maintain deployment pipelines with quality validation gates, automating release workflows across multiple environments
Manage cloud resources and maintain infrastructure as code across environments, optimizing for cost, reliability, and scalability
Build and extend evaluation pipelines that assess AI agent conversation quality, accuracy, and safety, collaborating with product and engineering to evolve evaluation criteria
Requirements
Required
Minimum 2 years of professional software engineering experience
Proficiency in Python
Experience with cloud platforms (AWS, Azure, or GCP)
Experience with monitoring and observability tools (Datadog, CloudWatch, Grafana, or similar)
Familiarity with CI/CD pipelines and infrastructure as code
Experience with APIs and distributed systems
Willingness to learn new AI/LLM concepts, frameworks, and technologies
Preferred
Experience writing test frameworks or automated evaluation systems
Exposure to LLMs, prompt engineering, or AI agent systems
Startup experience or the ability to thrive in fast-paced environments
Background in NLP, computer vision, or AI agent systems