Take technical ownership of medium- to large-scale web platforms, contributing hands-on while guiding architectural and delivery decisions.
Translate business goals into robust, scalable technical solutions.
Provide technical leadership through mentoring, code reviews, and pairing, helping to raise engineering standards and quality across the team.
Design and evolve front-end and back-end architectures with a focus on performance, accessibility, security, and maintainability.
Proactively identify and address technical risks, delivery bottlenecks, and systemic issues.
Champion high-quality engineering practices such as automated testing, CI/CD, and observability.
Operate comfortably within Agile, Lean, or hybrid delivery models, adapting processes pragmatically to suit the context.
Requirements
Strong, modern JavaScript and TypeScript expertise, with deep understanding of browser and runtime behavior.
Production experience with a primary front-end framework (React-based preferred, e.g. Next.js), and the ability to evaluate alternatives when appropriate.
Advanced CSS, layout systems, animation, and modern styling approaches.
Node.js and API-driven back-end development, including REST and GraphQL.
Cloud-native development on AWS, Azure, or GCP, including serverless and managed services.
Experience designing and operating microservice or modular architectures.
Strong grounding in web standards, accessibility (WCAG), and internationalisation.
Relational and non-relational data stores, including cloud-hosted databases.
Build tooling, automated testing, and CI/CD pipelines.
Performance profiling, monitoring, and optimisation across the full web stack.
Hands-on experience building LLM-powered web applications in production.
Experience with LLM orchestration frameworks such as Vercel AI SDK, LangChain, LlamaIndex, or equivalent, including implementing agents, tools, and function-calling patterns.
Understanding of prompt design, system instructions, and structured outputs.
Experience monitoring, evaluating, and iterating on LLM behavior (quality, latency, cost).
Familiarity with deploying and scaling LLM-backed services in cloud environments.