Plative is looking for an AI Architect / Lead Engineer to design and deliver agentic, LLM-powered systems for its clients. This is a hands-on role focused on generative AI and agentic workflows, requiring expertise in both the technical design and the implementation of GenAI solutions.
Responsibilities:
- Lead technical discovery with clients to identify high-value GenAI and agent use cases tied to concrete business outcomes
- Translate fuzzy ideas into clear solution designs, user journeys, and MVP scopes that can be quickly validated
- Design end-to-end architectures for GenAI applications: frontend, backend/APIs, orchestration, LLM providers, vector databases, and integrations with enterprise systems and SaaS platforms
- Build and maintain LLM-powered services: conversational copilots, workflow agents, embedded assistants, and task-specific bots for internal and external users
- Design, implement, and operate agentic systems: planner/executor patterns, tool-using agents, and (where appropriate) multi-agent patterns for complex workflows
- Integrate agents with real tools and systems (REST/GraphQL APIs, internal microservices, workflow engines, data platforms), including authentication, authorization, and auditing
- Establish standards for prompt and system design, tool schemas, safety guardrails, observability, and reliability for GenAI and agentic solutions across projects
- Mentor engineers and consultants, review designs and code, and drive best practices and shared patterns across multiple client engagements
- Create reusable reference architectures, templates, and frameworks that accelerate future GenAI and agent projects
- Contribute to thought leadership via internal enablement and external content (talks, blog posts, OSS) when appropriate
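The planner/executor and tool-using agent patterns named in the responsibilities above can be illustrated with a minimal, provider-agnostic sketch. Everything here (the `TOOLS` registry, the dict-based tool schema, the hard-coded plan) is an illustrative assumption, not Plative's actual stack; in a real system the plan would come from an LLM and inputs would be validated against a full JSON schema.

```python
import json
from typing import Any

# Illustrative tool registry: each tool declares a parameter schema and a
# handler. Real deployments would also attach auth, auditing, and rate limits.
TOOLS: dict[str, dict[str, Any]] = {
    "lookup_order": {
        "description": "Fetch an order by id from an internal API (stubbed).",
        "parameters": {"order_id": "string"},
        "handler": lambda args: {"order_id": args["order_id"], "status": "shipped"},
    },
}

def execute_step(step: dict[str, Any]) -> dict[str, Any]:
    """Executor half of a planner/executor loop: dispatch one planned tool call."""
    tool = TOOLS[step["tool"]]
    # Minimal input validation against the declared parameter names.
    missing = set(tool["parameters"]) - set(step["args"])
    if missing:
        raise ValueError(f"missing args: {missing}")
    return tool["handler"](step["args"])

def run_plan(plan: list[dict[str, Any]]) -> list[dict[str, Any]]:
    """Run each planned step in order, collecting tool results for the agent."""
    return [execute_step(step) for step in plan]

# In production the plan would be produced by an LLM; here it is hard-coded.
plan = [{"tool": "lookup_order", "args": {"order_id": "A-123"}}]
results = run_plan(plan)
print(json.dumps(results))
```

The separation between a declarative tool schema and an executor that validates before dispatch is the core of the pattern; multi-agent variants layer additional planners on top of the same executor.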
Requirements:
- 6–10+ years of professional experience as a software engineer, backend engineer, or solutions/enterprise architect
- Proven track record shipping production-grade backend systems and APIs (not just prototypes or research notebooks)
- Strong programming skills in at least one major backend language (e.g., Python, TypeScript/Node, Java/Scala), with solid engineering practices (testing, code review, CI/CD, version control)
- Demonstrated experience with agentic engineering practices, i.e., AI-native development workflows that make LLM-powered coding assistants, AI-driven code generation, and prompt-driven prototyping core parts of the software development lifecycle
- Significant experience with at least one major cloud provider (AWS, Azure, or GCP), including designing and operating services using containers and/or serverless, logging, metrics, and alerting
- Hands-on experience building applications on top of hosted LLMs (e.g., OpenAI, Azure OpenAI, Anthropic, AWS Bedrock, Gemini, or open-source models via hosted platforms)
- Strong prompt and system message design skills for chats, copilots, and task automation, including iterative refinement and evaluation
- Familiarity with embeddings and vector databases (e.g., Pinecone, Weaviate, pgvector, Redis, OpenSearch) and retrieval-augmented generation (RAG) patterns: chunking strategies, metadata, and relevance evaluation
- Understanding of GenAI-specific evaluation concerns: hallucinations, safety controls, relevance, and UX patterns for user control and correction
- Prior experience building agentic systems, including:
  - Planner/executor patterns and multi-step reasoning flows
  - Tool-using agents that call external APIs, services, and workflows
- Ability to design robust tools: clear schemas, input/output contracts, validation, rate-limiting, and guardrails for safe execution
- Strong focus on reliability in agent workflows: idempotency, retries, fallbacks, circuit breakers, timeouts, and safe failure modes
- Experience implementing observability for agents: logging of tool calls and reasoning traces, metrics, dashboards, and debugging workflows
- Strong communication skills with both technical and non-technical stakeholders; able to explain complex AI and architecture decisions in clear, accessible language
- Comfort leading client workshops, running demos, and defending technical approaches with executives, product teams, and engineering teams
- Ability to own a problem from discovery through implementation, balancing long-term architecture quality with the realities of client timelines and budgets
- Collaborative mindset and willingness to mentor and uplevel other engineers and consultants on GenAI and agentic patterns
- Public artifacts that demonstrate your work with GenAI and agents, such as:
  - Open-source repositories (libraries, frameworks, or example applications involving LLMs/agents)
  - Technical blog posts, talks, or walkthroughs explaining your LLM/agent system designs and trade-offs
  - Demos (live apps, recordings, or interactive playgrounds) that showcase real agent behavior and integrations
- Ability to walk through these artifacts in detail during interviews: architecture, design choices, failure modes, and what you'd do differently now
- Experience with data and AI platforms such as Databricks and Snowflake, including building or integrating GenAI/agent workflows on top of lakehouse architectures, feature stores, or governed data-sharing layers
- Experience with enterprise SaaS ecosystems (e.g., Salesforce, ServiceNow, Microsoft 365, Google Workspace, ticketing or CRM systems) and embedding copilots/agents into those environments
- Familiarity with security, compliance, and data governance constraints in enterprise contexts (PII handling, audit logs, RBAC, policy enforcement around model and data usage)
- Experience with evaluation frameworks and tooling for GenAI (prompt A/B testing, human-in-the-loop review flows, rubric-based evaluation, offline evaluation harnesses)
- Cloud certifications or AI-focused certifications (AWS, Azure, GCP) and/or prior work in consulting or professional services environments
- Experience with machine learning beyond GenAI: designing and training predictive models (e.g., classification, regression, recommendation, time-series forecasting) and integrating them into production systems as part of larger solutions
- Familiarity with common ML frameworks and tooling (e.g., scikit-learn, XGBoost, TensorFlow/PyTorch, MLflow, SageMaker), and an understanding of how predictive models and LLM/agent systems can complement each other in end-to-end architectures
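As a concrete illustration of the reliability patterns the requirements call out for agent workflows (retries, timeouts, fallbacks, safe failure modes), a minimal retry wrapper might look like the sketch below. The `flaky_tool` dependency and its failure behavior are hypothetical, standing in for any transient upstream error an agent's tool call might hit.

```python
import time

def with_retries(fn, *, attempts=3, base_delay=0.01, fallback=None):
    """Retry a flaky call with exponential backoff, returning a safe fallback
    value if every attempt fails, rather than crashing the agent mid-workflow."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                return fallback  # safe failure mode: degrade instead of crash
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff

# Hypothetical flaky dependency: fails twice, then succeeds on the third call.
calls = {"n": 0}
def flaky_tool():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient upstream error")
    return "ok"

print(with_retries(flaky_tool, fallback="degraded"))  # prints "ok"
```

Production agents would extend this with idempotency keys, per-call timeouts, circuit breakers, and structured logging of each attempt so failures are observable in dashboards.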