Rex.zone focuses on applied engineering work that supports AI/ML training pipelines. The role involves collaborating with global teams to produce high-quality training data and improve model performance across a variety of workflows.
Responsibilities:
- Design and execute annotation and evaluation workflows for NLP, computer vision, and multimodal tasks
- Implement RLHF and prompt evaluation protocols using well-defined rubrics and calibration
- Define annotation guidelines, enforce compliance, and maintain edge-case documentation
- Run QA audits, spot checks, inter-annotator agreement analysis, and error-taxonomy tracking
- Curate datasets for named entity recognition, classification, ranking, summarization, and safety policy adherence
- Use Python and data tooling to automate checks, sampling plans, reporting, and quality metrics
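As a concrete illustration of the automation duties above, inter-annotator agreement is commonly measured with Cohen's kappa. The sketch below is a minimal stdlib-only example (the function name and sample labels are illustrative, not part of the posting's actual tooling):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Agreement between two annotators: 1.0 is perfect, 0.0 is chance-level."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labeled the same.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement, from each annotator's label marginals.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(counts_a[lbl] * counts_b[lbl] for lbl in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical example: two annotators labeling six items for a binary safety policy.
a = ["safe", "safe", "unsafe", "safe", "unsafe", "safe"]
b = ["safe", "unsafe", "unsafe", "safe", "unsafe", "safe"]
print(round(cohens_kappa(a, b), 3))  # → 0.667
```

In a real QA workflow, a threshold on kappa (e.g. below ~0.6) would typically trigger a calibration session or guideline revision rather than shipping the batch.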
Requirements:
- Mid-senior STEM/engineering background (CS, EE, math, physics, statistics, or related)
- Strong Python skills; comfort with structured and unstructured data
- Practical understanding of LLM evaluation, data labeling, and QA concepts
- Experience with NLP tasks (e.g., NER, text classification, relevance ranking, summarization)
- Ability to interpret guidelines, resolve ambiguity, and communicate asynchronously in a remote setting
- Experience with RLHF pipelines, preference ranking, or rubric-based evaluation
- Familiarity with computer vision annotation (bounding boxes, polygons, segmentation)
- Experience building quality systems (sampling, audits, calibration sessions, disagreement resolution)
- Exposure to content safety labeling or trust & safety workflows