Design and execute large-scale evaluation plans for LLM-based agents performing clinical documentation, scheduling, billing, communications, and general workflow automation tasks.
Build end-to-end test harnesses that validate model behavior under different configurations (prompt templates, context sources, tool availability, etc.).
Partner with clinicians to define gold-standard expected outcomes for performance comparisons in clinically consequential domains, and with other subject matter experts in non-clinical domains.
Run and replicate experiments across multiple models, parameters, and interaction types to determine optimal configurations.
Deploy and maintain ongoing sampling for post-deployment governance of agent fleets.
Analyze results and clearly summarize tradeoffs for product and engineering stakeholders, as well as for technical stakeholders among our customers and the broader market.
Take ownership of internal eval tooling and infrastructure, ensuring speed, rigor, and reproducibility.
Identify and recommend candidates for reinforcement fine-tuning or retrieval augmentation based on gaps surfaced in evals.
Requirements
5+ years of experience in applied machine learning or AI engineering, with a focus on evaluation and benchmarking.
Proficiency with foundation model APIs and experience orchestrating complex agent behaviors via prompts or tools.
Experience designing and running high-throughput evaluation pipelines, ideally including human-in-the-loop or expert-labeled benchmarks.
Strong Python engineering skills and familiarity with experiment management tools and data engineering toolsets in general, including SQL and database management.
Familiarity with clinical or healthcare data is a strong plus.
Experience with reinforcement fine-tuning, model monitoring, or RLHF is a plus.