Protect dataset quality at scale by running sampling-based audits, enforcing quality gates, and ensuring every dataset release is audit-ready and meets acceptance thresholds.
Execute sampling plans (stratified by label type, difficulty, site/surgeon/device, model confidence) for ongoing QA and release audits (see the sampling sketch after this list).
Audit annotations against the guidelines and ontology; score defects using standardized rubrics covering severity, type, and root cause (see the rubric sketch below).
Run and validate quality gates prior to release: completeness/coverage checks; schema/constraint checks (illegal combinations, temporal consistency, boundary rules); defect density and rework thresholds (see the gate sketch below).
Generate rework tickets with precise instructions; track closure and verify fixes (see the ticket sketch below).
Partner with Annotation Lead to drive calibration actions (training refreshers, guideline clarifications, examples library).
Maintain QA dashboards: defect rate, rework %, gate pass rate, audit cycle time, and top error patterns by annotator/team/label type (see the metrics sketch below).
Support release readiness: ensure required artifacts are complete for the Evidence Pack (QA results, sampling logs, sign-offs; see the manifest sketch below).
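The sketches below illustrate several of these mechanics. All are minimal Python examples; the column names, thresholds, enum values, and file names are assumptions standing in for the team's actual schema and release spec, not a prescribed implementation.

First, one way to draw a stratified audit sample with pandas, covering every stratum while capping total audit load:

```python
import pandas as pd

def stratified_sample(df: pd.DataFrame, strata: list[str],
                      rate: float = 0.05, min_per_stratum: int = 10,
                      seed: int = 42) -> pd.DataFrame:
    """Sample each stratum at `rate`, but never fewer than
    `min_per_stratum` items (or the whole stratum if smaller)."""
    def take(group: pd.DataFrame) -> pd.DataFrame:
        n = max(min_per_stratum, round(len(group) * rate))
        return group.sample(n=min(n, len(group)), random_state=seed)
    return (df.groupby(strata, group_keys=False)
              .apply(take)
              .reset_index(drop=True))

# Hypothetical usage: stratify by label type, site, and binned model confidence.
# df["conf_bin"] = pd.cut(df["model_confidence"], bins=[0, 0.5, 0.8, 1.0])
# audit_set = stratified_sample(df, ["label_type", "site", "conf_bin"])
```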
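A defect rubric pays off when it is machine-readable, so scores aggregate cleanly across auditors. The enum values here are illustrative placeholders for the team's actual taxonomy:

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    CRITICAL = 3  # blocks release, e.g. wrong label class on a key event
    MAJOR = 2     # must be reworked before sign-off
    MINOR = 1     # cosmetic; acceptable to batch-fix

class RootCause(Enum):
    GUIDELINE_GAP = "guideline_gap"
    ONTOLOGY_AMBIGUITY = "ontology_ambiguity"
    ANNOTATOR_ERROR = "annotator_error"
    TOOLING = "tooling"

@dataclass
class Defect:
    item_id: str
    defect_type: str   # e.g. "boundary", "missing_label"
    severity: Severity
    root_cause: RootCause
    note: str = ""     # free-text context for the rework ticket
```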
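Release gates can be expressed as named pass/fail checks over the dataset plus the audit's defect log. The rules and thresholds below stand in for whatever the release spec actually defines:

```python
import pandas as pd

def run_gates(df: pd.DataFrame, defects: pd.DataFrame,
              max_defect_density: float = 0.02,
              max_rework_rate: float = 0.10) -> dict[str, bool]:
    """Return a pass/fail result per gate; release requires all True."""
    return {
        # Completeness/coverage: every item carries a label.
        "completeness": bool(df["label"].notna().all()),
        # Schema/constraint: an illustrative illegal-combination rule.
        "no_illegal_combos": not ((df["label"] == "incision")
                                  & (df["phase"] == "closure")).any(),
        # Temporal consistency: an event may not end before it starts.
        "temporal_consistency": bool((df["end_ts"] >= df["start_ts"]).all()),
        # Defect density measured on the audited sample.
        "defect_density": len(defects) / max(len(df), 1) <= max_defect_density,
        # Rework threshold over the release.
        "rework_rate": df["reworked"].mean() <= max_rework_rate,
    }

# release_ready = all(run_gates(dataset, audit_defects).values())
```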
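Turning a scored defect into a ticket is then mechanical. This sketch reuses the hypothetical Defect record above; the ticket fields are likewise placeholders:

```python
def make_rework_ticket(defect: Defect, guideline_ref: str) -> dict:
    """Build a ticket precise enough that the fix can be verified."""
    return {
        "item_id": defect.item_id,
        "title": f"[{defect.severity.name}] {defect.defect_type} on {defect.item_id}",
        "instructions": (f"Correct the {defect.defect_type} defect per {guideline_ref}. "
                         f"Root cause: {defect.root_cause.value}. {defect.note}"),
        "verification": "Re-audit against the same rubric before closing.",
        "status": "open",
    }
```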
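The dashboard headline numbers reduce to a few ratios over per-item audit records; the column names are assumptions:

```python
import pandas as pd

def qa_metrics(audits: pd.DataFrame) -> dict[str, float]:
    """Assumed columns: has_defect, reworked, gate_passed (bool); cycle_days (float)."""
    return {
        "defect_rate": audits["has_defect"].mean(),
        "rework_pct": 100 * audits["reworked"].mean(),
        "gate_pass_rate": audits["gate_passed"].mean(),
        "audit_cycle_days_p50": audits["cycle_days"].median(),
    }

# Top error patterns, e.g. by annotator and defect type:
# (audits[audits["has_defect"]]
#     .groupby(["annotator", "defect_type"]).size()
#     .sort_values(ascending=False).head(10))
```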
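Finally, Evidence Pack readiness can be checked as a simple manifest. A minimal sketch, assuming the pack is a directory of named files (the artifact file names are placeholders):

```python
from pathlib import Path

REQUIRED_ARTIFACTS = [
    "qa_results.json",   # gate outcomes and defect scores
    "sampling_log.csv",  # which items were audited, and under which plan
    "sign_offs.yaml",    # named approver per gate
]

def missing_artifacts(pack_dir: str) -> list[str]:
    """Return missing artifact names; an empty list means audit-ready."""
    root = Path(pack_dir)
    return [name for name in REQUIRED_ARTIFACTS if not (root / name).is_file()]
```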