YO IT Consulting partners with leading AI teams to enhance conversational AI systems. The role involves evaluating AI-generated coding responses for accuracy and clarity, conducting fact-checking, and ensuring adherence to evaluation standards.
Responsibilities:
- Evaluate LLM-generated responses to coding and software engineering queries for accuracy, reasoning, clarity, and completeness
- Conduct fact-checking using trusted public sources and authoritative references
- Verify accuracy by executing code and validating outputs with appropriate tools
- Annotate model responses by identifying strengths, areas for improvement, and factual or conceptual inaccuracies
- Assess code quality, readability, algorithmic soundness, and explanation quality
- Ensure model responses align with expected conversational behavior and system guidelines
- Apply consistent evaluation standards by following clear taxonomies, benchmarks, and detailed evaluation guidelines
Requirements:
- You hold a BS, MS, or PhD in Computer Science or a closely related field
- You have significant real-world experience in software engineering or related technical roles
- You are an expert in at least one relevant programming language (e.g., Python, Java, C++, JavaScript, Go, Rust)
- You are able to solve HackerRank or LeetCode Medium- and Hard-level problems independently
- You have experience contributing to well-known open-source projects, including merged pull requests
- You have significant experience using LLMs while coding and understand their strengths and failure modes
- You have strong attention to detail and are comfortable evaluating complex technical reasoning and identifying subtle bugs or logical flaws
- You have prior experience with RLHF, model evaluation, or data annotation work
- You have a track record in competitive programming
- You have experience reviewing code in production environments
- You are familiar with multiple programming paradigms or ecosystems
- You have experience explaining complex technical concepts to non-expert audiences