Veeam Software, the global leader in data resilience, is seeking an AI Engineer to join its Data Cloud Intelligence team. In this role, you will design and operate innovative systems that help customers extract insights from enterprise data, with a focus on building AI-powered services and optimizing workflows for performance and relevance.
Responsibilities:
- Build and operate cutting-edge AI-powered services that summarize document content, extract keywords, and generate vector embeddings from backed-up data
- Design secure interfaces for AI agents to query backup data via Model Context Protocol (MCP), enforcing tenant-aware RBAC and query constraints
- Own and optimize prompt engineering and evaluation workflows for our LLM-based assistant, Veeam Intelligence
- Help define and improve our AI quality benchmarks, whether for retrieval relevance, summarization accuracy, or agent response reliability
- Collaborate across the stack and across the VDC engineering organization to ensure AI workflows are deeply integrated and consistently performant
- Tune AI workflows for performance, cost, latency, and observability, which is especially important when working with billions of documents across global regions
Requirements:
- Strong backend engineering skills in Python and modern AI tooling (e.g., OpenAI APIs, LangChain, transformers, Azure ML)
- Experience working with embeddings, LLMs, or AI-based summarization/extraction pipelines in production environments
- Experience with prompt engineering and prompt tuning, including strategies to improve output quality, reduce hallucinations, and support structured responses
- Awareness of jailbreaking techniques and adversarial prompts; ability to think like an attacker to proactively harden LLM-powered workflows
- Exposure to agent architectures (e.g., Model Context Protocol, Agent to Agent)
- Security-first mindset: comfortable building systems with RBAC, data boundary enforcement, and operational safeguards against misuse
- Proactive in staying current with industry trends in LLMs, search, and AI infrastructure, and able to synthesize and share relevant developments with the team
- Practical knowledge of cloud-based infrastructure, preferably Azure (Event Bus Services, Blob Storage, Kubernetes, Cosmos DB, Azure OpenAI Service)
- Strong written and verbal communication skills in English, with the ability to collaborate with team members, flesh out solutions to ambiguous problems, and drive consensus
- Familiarity with prompt evaluation frameworks or LLM observability tooling
- Experience working with RBAC-enforced, multi-tenant systems