Responsibilities
Design, build, and iterate on production-grade AI systems leveraging LLMs and Generative AI technologies.
Architect scalable MLOps pipelines covering model training, evaluation, deployment, monitoring, and retraining.
Develop and maintain robust AI integrations via APIs, ensuring reliability, security, and performance at scale.
Stay at the forefront of developments in LLM tooling, prompt engineering, fine-tuning, and retrieval-augmented generation (RAG).
Lead high-quality planning processes that begin with clearly articulated intent — defining what we are trying to achieve and why before determining how.
Define verifiable outcomes for every initiative: establish measurable success criteria, evaluation benchmarks, and feedback loops that confirm AI systems are performing as designed.
Translate complex business requirements into well-scoped technical plans, with clear milestones, risk assessments, and decision points.
Champion a culture of thoughtful, evidence-based development — where assumptions are tested, outcomes are tracked, and learnings are fed back into future work.
Work closely with product, engineering, and commercial stakeholders to align AI capabilities with real-world business needs.
Mentor and guide junior developers, raising the overall standard of AI practice within the R&D team.
Requirements
Proven experience as a senior-level developer with a strong focus on AI/ML systems in production environments.
Deep hands-on expertise with LLMs and Generative AI (e.g. OpenAI, Anthropic, open-source models), including prompt engineering, fine-tuning, and evaluation.
Solid experience with MLOps practices and tooling (e.g. MLflow, Weights & Biases, Kubeflow, or similar).
Strong proficiency in building and consuming AI-powered APIs and integrations.
Demonstrated ability to plan and deliver complex projects with clearly defined intent and measurable outcomes.
Excellent communication skills — able to articulate technical decisions and trade-offs to both technical and non-technical audiences.
Proficiency in Python and relevant AI/ML frameworks (e.g. LangChain, LlamaIndex, PyTorch, or similar).
Experience in the hospitality technology sector or other enterprise SaaS environments.
Familiarity with cloud platforms (AWS, Azure, or GCP) and containerisation (Docker, Kubernetes).
Experience with vector databases (e.g. Pinecone, Weaviate, pgvector) and RAG architectures.
Background in applied research or a track record of bridging research and production engineering.