Turing is a leading research accelerator for frontier AI labs, based in San Francisco, California. The company is seeking a Software Engineering evaluator to create datasets for training large language models and to collaborate with researchers on improving AI-driven coding solutions.
Responsibilities:
- Create cutting-edge datasets for training, benchmarking, and advancing large language models, collaborating closely with researchers
- Curate code examples, provide precise solutions, and make corrections in Python, JavaScript (including ReactJS), C/C++, Java, Rust, and Go
- Evaluate and refine AI-generated code for efficiency, scalability, and reliability
- Work with cross-functional teams to enhance enterprise-level AI-driven coding solutions
- Build agents that verify code quality and identify error patterns
- Form hypotheses about stages of the software engineering lifecycle (prototyping, architecture design, API design, production implementation, launch, experiments, monitoring, operational maintenance) and evaluate model capabilities at each stage
- Design verification mechanisms that can automatically verify a solution to a software engineering task
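To make the last responsibility concrete, here is a minimal sketch of what an automatic verification mechanism for a coding task might look like. It is purely illustrative and not part of the role description: the `verify_solution` helper, the `add` task, and the test cases are all hypothetical, and a production verifier would also need sandboxing, timeouts, and resource limits.

```python
# Illustrative sketch: verify a candidate solution (given as source code)
# by executing it in an isolated namespace and checking predefined
# input/output test cases. All names here are hypothetical examples.

def verify_solution(source: str, func_name: str, test_cases: list) -> bool:
    """Return True if `source` defines a callable `func_name` that
    passes every (args, expected) pair in `test_cases`."""
    namespace: dict = {}
    try:
        exec(source, namespace)          # run the candidate code
    except Exception:
        return False                     # syntax or import error -> fail
    func = namespace.get(func_name)
    if not callable(func):
        return False                     # required function missing
    for args, expected in test_cases:
        try:
            if func(*args) != expected:
                return False             # wrong answer
        except Exception:
            return False                 # runtime error on a test case
    return True

# Hypothetical candidate solution and test cases
candidate = "def add(a, b):\n    return a + b\n"
tests = [((1, 2), 3), ((-1, 1), 0)]
print(verify_solution(candidate, "add", tests))  # True
```

A real verifier would typically run candidates in a separate sandboxed process rather than via `exec`, but the core pattern (execute, then check against held-out test cases) is the same.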
Requirements:
- 3+ years of software engineering experience
- Strong expertise in building full-stack applications and deploying scalable, production-grade software using modern languages and tools
- Deep understanding of software architecture, design, development, debugging, and code quality/review assessment
- Excellent oral and written communication skills for clear, structured evaluation rationales