Rex.zone is a recruiting and talent solutions platform that supports modern AI/ML training workflows across the United States. The role focuses on building scalable annotation and evaluation programs to improve training data quality and model performance in production LLM pipelines.
Responsibilities:
- Own training data quality metrics, error taxonomies, and continuous improvement plans
- Design labeling schemas and write/update annotation guidelines and ambiguity-resolution rules
- Implement QA evaluation programs (review queues, spot checks, gold tasks, inter-annotator agreement)
- Perform RLHF preference labeling and ranking to support alignment and helpfulness
- Run prompt evaluations and regression test sets to track model performance over time
- Support NLP tasks such as named entity recognition and intent classification
- Support computer vision annotation (e.g., bounding boxes, segmentation) when needed
- Execute content safety labeling for policy-aligned model behavior
- Partner with engineering to integrate tools and datasets into LLM training pipelines
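One of the QA signals named above, inter-annotator agreement, can be sketched as a minimal Cohen's kappa computation over two annotators' labels (the function name and the example labels are illustrative only, not part of any Rex.zone tooling):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Agreement between two annotators, corrected for chance (Cohen's kappa)."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labeled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement from each annotator's marginal label distribution.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical content-safety labels from two reviewers:
kappa = cohens_kappa(["safe", "unsafe", "safe", "safe"],
                     ["safe", "unsafe", "unsafe", "safe"])
```

Kappa near 0 means agreement is no better than chance and usually signals that the guidelines' ambiguity-resolution rules need revision; values close to 1 indicate a stable labeling schema.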
Requirements:
- 3+ years in data operations, MLOps, evaluation, or annotation engineering
- Strong understanding of NLP, LLM evaluation, and human feedback signals (RLHF)
- Experience with QA evaluation methods, sampling strategies, and disagreement analysis
- Familiarity with annotation tools and workflow automation
- Excellent technical writing for guidelines and repeatable processes
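The sampling strategies mentioned above can be as simple as a seeded, reproducible spot-check draw from completed annotation tasks; a minimal sketch (function name and review rate are assumptions for illustration):

```python
import random

def sample_for_review(task_ids, rate=0.1, seed=42):
    """Draw a reproducible spot-check sample of completed tasks for QA review.

    A fixed seed makes the sample auditable: rerunning the draw on the same
    task list yields the same review queue.
    """
    rng = random.Random(seed)
    # Always review at least one task, even for very small batches.
    k = max(1, round(len(task_ids) * rate))
    return rng.sample(task_ids, k)

review_queue = sample_for_review(list(range(100)))  # 10% of 100 tasks
```

In practice the draw is often stratified by annotator or label class so that disagreement analysis covers every part of the taxonomy, but a uniform seeded sample is the usual starting point.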