Turing is the world’s leading research accelerator for frontier AI labs and a trusted partner for global enterprises deploying advanced AI systems. As a Software Engineering evaluator, you will create cutting-edge datasets for training large language models and collaborate with researchers to enhance AI-driven coding solutions.
Responsibilities:
- Work on AI model training initiatives by curating code examples, building solutions, and correcting code in Python, JavaScript (including ReactJS), C/C++, Java, Rust, and Go
- Evaluate and refine AI-generated code to ensure that it is efficient, scalable, and reliable
- Collaborate with cross-functional teams to enhance AI-driven coding solutions against industry performance benchmarks
- Build agents that can verify code quality and identify error patterns
- Form hypotheses about stages of the software engineering cycle (prototyping, architecture design, API design, production implementation, launch, experiments, monitoring, operational maintenance) and evaluate model capabilities at each stage
- Design verification mechanisms that can automatically verify a solution to a software engineering task
Requirements:
- 5+ years of software engineering experience, including 2+ years of continuous full-time experience at a top-tier product company (e.g., Google, Stripe, Amazon, Apple, Meta, Netflix, Microsoft, Datadog, Dropbox, Shopify, PayPal, IBM Research)
- Strong expertise in building full-stack applications and deploying scalable, production-grade software using modern languages and tools
- Deep understanding of software architecture, design, development, debugging, and code quality/review assessment
- Excellent oral and written communication skills for clear, structured evaluation rationales