Role Overview
- You will design and execute rigorous benchmarks and define dataset standards.
- Collaborating closely with our R&D team, you will build the evaluation infrastructure that guides the evolution of Pathway’s post-transformer models.
You Will
- Proactively identify, prioritize, and curate relevant public and client-driven benchmarks across our target use cases and markets.
- Evaluate candidate benchmarks for clarity, data quality, evaluation methodology, and fit with our model roadmap.
- Run benchmarks with baseline models to validate setup, uncover edge cases, and de-risk R&D runs.
- Hand off “benchmark-ready” packages to R&D (specs, data, evaluation scripts, expected metrics, constraints).
- Maintain a shared vocabulary and documentation around benchmarks, datasets, and evaluation formats that GTM and R&D can both use.
- Track and organize benchmark results, model leaderboards, and “what good looks like” for different customers and scenarios.
- Contribute to demos and public-facing proof points based on benchmark outcomes.
- You will play a key role in defining and driving the benchmarking process for AI model evaluation. Your work will directly influence what we build, how we talk about it, and how customers and the market experience BDH.
Requirements
- Cover letter: It's always a pleasure to say hi! If you could leave us 2-3 lines about yourself, we'd really appreciate it.
- You are expected to meet at least one of the following criteria:
- You have published at least one paper at NeurIPS, ICLR, or ICML where you were the lead author or made significant conceptual and code contributions.
- You have significantly contributed to an LLM training effort that became newsworthy (topped a Hugging Face benchmark, best-in-class model, etc.), preferably using multiple GPUs.
- You have spent at least 6 months working in a leading machine learning research center (e.g. Google Brain / DeepMind, Apple, Meta, Anthropic, Nvidia, MILA).
- You were an ICPC World Finalist, or an IOI, IMO, or IPhO medalist in high school.
You
- Have experience with ML/LLM evaluation, data science, or technical product roles, ideally around benchmarks or experimentation.
- Are comfortable reading papers, leaderboards, and GitHub repos, and turning them into clear, repeatable benchmark specs.
- Can talk comfortably with both engineers and customers, and translate between technical detail and business value.
- Care about high-quality data, reproducible experiments, and crisp documentation.
- Are respectful of others.
- Are fluent in English.
Benefits
- Type of contract: Full-time, permanent
- Preferred joining date: immediate. The positions are open until filled, so please apply as soon as possible.
- Compensation: based on profile and location.
- Location: Remote work, with the option to work from or meet other team members in one of our offices: Palo Alto, CA; Paris, France; or Wroclaw, Poland. Candidates based anywhere in the EU, UK, United States, or Canada will be considered.
If you meet our broad requirements but are missing some experience, don’t hesitate to reach out to us.