Turing is the world’s leading research accelerator for frontier AI labs and a trusted partner for global enterprises deploying advanced AI systems. As a Software Engineering evaluator, you will create cutting-edge datasets for training and benchmarking large language models, collaborate with researchers, and enhance AI-driven coding solutions.
Responsibilities:
- Create cutting-edge datasets for training, benchmarking, and advancing large language models, collaborating closely with researchers
- Curate code examples, provide precise solutions, and make corrections in Python, JavaScript (including ReactJS), C/C++, Java, Rust, and Go
- Evaluate and refine AI-generated code for efficiency, scalability, and reliability
- Work with cross-functional teams to enhance enterprise-level AI-driven coding solutions
- Build agents that can verify code quality and identify error patterns
- Form hypotheses about steps in the software engineering cycle (prototyping, architecture design, API design, production implementation, launch, experimentation, monitoring, operational maintenance) and evaluate model capabilities at each step
- Design verification mechanisms that can automatically verify a solution to a software engineering task
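To illustrate the last responsibility, below is a minimal Python sketch of an automated verifier for a coding task. It assumes each task ships with reference test cases; the names (`verify_solution`, `CANDIDATE`, `TESTS`) are illustrative, not an existing API, and a production verifier would sandbox execution rather than use `exec` directly.

```python
def verify_solution(solution_code: str, tests: list) -> dict:
    """Execute candidate code in an isolated namespace and run reference checks."""
    namespace = {}
    try:
        # Load the candidate's definitions; a real harness would sandbox this.
        exec(solution_code, namespace)
    except Exception as exc:
        return {"passed": 0, "failed": len(tests), "error": repr(exc)}

    passed = failed = 0
    for func_name, args, expected in tests:
        try:
            if namespace[func_name](*args) == expected:
                passed += 1
            else:
                failed += 1
        except Exception:
            failed += 1
    return {"passed": passed, "failed": failed, "error": None}

# Example: automatically verify a model-generated fizzbuzz implementation.
CANDIDATE = """
def fizzbuzz(n):
    if n % 15 == 0: return "FizzBuzz"
    if n % 3 == 0: return "Fizz"
    if n % 5 == 0: return "Buzz"
    return str(n)
"""
TESTS = [
    ("fizzbuzz", (3,), "Fizz"),
    ("fizzbuzz", (10,), "Buzz"),
    ("fizzbuzz", (15,), "FizzBuzz"),
]
report = verify_solution(CANDIDATE, TESTS)  # {'passed': 3, 'failed': 0, 'error': None}
```

The same pass/fail report structure extends naturally to error-pattern analysis: failures can be bucketed by exception type or by which reference case broke.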
Requirements:
- 3+ years of software engineering experience
- Strong expertise in building full-stack applications and deploying scalable, production-grade software using modern languages and tools
- Deep understanding of software architecture, design, development, debugging, and code quality/review assessment
- Excellent oral and written communication skills for clear, structured evaluation rationales
- Candidates must be based in the US, Canada, or Western European countries (UK, Netherlands, Italy, Germany, …)