BGB Group is a healthcare communications agency known for its excellence and professionalism. The agency is seeking a hands-on VP of Forward Deployed AI Engineering to lead the deployment of AI automation and agentic systems, working across teams to prototype and operationalize AI capabilities in real workflows.
Responsibilities:
- Forward Deployed Delivery (Prototype to Production)
  - Rapidly build client-relevant demos and proofs of concept that reflect real BGB workflows (content generation, modular build, review/MLR support, routing, measurement, etc.)
  - Convert successful prototypes into production-ready implementations with security, access controls, auditability, and documentation
  - Partner with delivery teams to embed solutions into day-to-day operations, not slideware
- Agentic AI and Workflow Automation Implementation
  - Design and deploy agentic workflows (multi-step, tool-using agents; retrieval and reasoning; human-in-the-loop approvals)
  - Stand up automation pipelines for repeatable tasks (brief-to-output flows, content QA checks, tagging/classification, summarization, knowledge retrieval)
  - Build reusable components: templates, agent patterns, evaluation harnesses, and integration adapters
- Systems Integration and APIs (Make it all work together)
  - Write and maintain APIs/services that connect LLM systems to enterprise tools (content repositories, project systems, data stores, analytics, identity/access)
  - Implement RAG and knowledge services (indexing, retrieval, permission-aware access, citations, provenance)
  - Orchestrate data flows across systems with strong logging, error handling, and versioning
- LLM Ops: Observability, Evaluation, Reliability
  - Implement LLM observability (traces, cost, latency, quality) and evaluation (offline and online, regression testing, guardrails)
  - Own the operational reality: monitoring, incident response patterns, rollbacks, prompt/model version control
  - Establish practical standards for reliability and safety in regulated environments
- Enablement (Technical + Practical)
  - Translate needs into technical designs and explain tradeoffs clearly to non-engineers
  - Create lightweight playbooks and internal documentation so teams can adopt and extend what you build
  - Support proposal technical sections and scoping with grounded, buildable architectures
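To make the agentic responsibilities above concrete, here is a minimal sketch of a human-in-the-loop tool-calling step with an audit trail. All names (`AgentStep`, `tag_content`) are hypothetical illustrations, not BGB's actual stack; the approve-before-execute policy stands in for the MLR-style review gates described above.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class ToolCall:
    tool: str
    args: dict
    approved: bool = False  # flipped only by a human reviewer

@dataclass
class AgentStep:
    """One step of a multi-step agent: propose a tool call, gate it on human approval, then execute."""
    tools: Dict[str, Callable[..., str]]
    audit_log: List[str] = field(default_factory=list)  # every action is recorded for auditability

    def propose(self, tool: str, args: dict) -> ToolCall:
        call = ToolCall(tool=tool, args=args)
        self.audit_log.append(f"proposed {tool} {args}")
        return call

    def execute(self, call: ToolCall) -> str:
        if not call.approved:
            self.audit_log.append(f"blocked {call.tool}: awaiting human approval")
            raise PermissionError("human approval required before execution")
        result = self.tools[call.tool](**call.args)
        self.audit_log.append(f"executed {call.tool} -> {result}")
        return result

# Usage: a hypothetical tagging tool the agent may only run after sign-off.
step = AgentStep(tools={"tag_content": lambda text, tag: f"{tag}:{text}"})
call = step.propose("tag_content", {"text": "draft brief", "tag": "MLR-review"})
call.approved = True  # a reviewer signs off
print(step.execute(call))  # -> MLR-review:draft brief
```

In a production version, the approval flag would come from an identity-aware review service and the audit log from structured tracing, but the shape of the gate is the same.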
Requirements:
- Hands-on engineering background (you build systems yourself): strong Python plus one backend stack (TypeScript/Node, .NET, or similar)
- Experience deploying LLM applications in real orgs: prompts, tools/function calling, retrieval, evaluation, and iteration loops
- Experience with LLM orchestration and building multi-step workflows (agents, DAGs, tool use, human approvals)
- Ability to design and implement APIs and integrations across enterprise systems (auth, permissioning, data flow, reliability)
- Practical experience with LLM observability and evaluation (quality metrics, regression tests, traces, cost controls)
- Comfortable in client-facing or stakeholder-heavy environments: can clearly explain what's possible and what's not
- Experience with multi-agent systems in production, or agent frameworks at scale
- Enterprise automation/workflow experience (process automation, knowledge base integration, CRM/ERP or content systems)
- Experience in regulated industries (healthcare/life sciences) including audit trails and compliance constraints
- Azure ecosystem experience (Azure AI Foundry or Azure-hosted LLM stacks), plus identity/permissions patterns
- Contributions to internal developer platforms, reusable tooling, or open-source
- You move fast but don't break trust: pragmatic engineering, strong prioritization, and reliable delivery
- You like ambiguity and can turn it into a working system
- You're energized by bridging the gap between demo and deployment and owning the full lifecycle
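The evaluation and regression-testing experience the role calls for can be pictured with a tiny offline harness. This is a hedged sketch with hypothetical names (`EvalCase`, `run_regression`) and a stub standing in for the LLM call; real harnesses would layer on richer quality metrics, traces, and cost tracking.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class EvalCase:
    prompt: str
    must_contain: str  # minimal quality check: a required substring in the output

def run_regression(model: Callable[[str], str], cases: List[EvalCase]) -> dict:
    """Offline regression pass: run every case, record failures, report the counts."""
    failures = []
    for case in cases:
        output = model(case.prompt)
        if case.must_contain not in output:
            failures.append((case.prompt, output))
    return {"total": len(cases), "failed": len(failures), "failures": failures}

# Usage with a stub "model" standing in for an LLM call.
stub = lambda prompt: f"Summary: {prompt.lower()}"
cases = [
    EvalCase("Q3 Launch Plan", "launch plan"),
    EvalCase("MLR Checklist", "checklist"),
]
report = run_regression(stub, cases)
print(report["failed"])  # 0 when every case passes
```

Run against every prompt or model change, a harness like this catches regressions before they reach client-facing workflows.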