Senior AI Software Engineer, Generative AI (AWS Bedrock)
Location: Reston, VA (Hybrid / Onsite as applicable)
About the Role
We are seeking a Senior AI Software Engineer to design and build production-grade AI applications powered by large language models (LLMs) and foundation models. In this role, you will develop scalable, commercial solutions using AWS Bedrock and other AI model providers, integrating advanced AI capabilities into real-world products and platforms.
You will work at the intersection of AI engineering, cloud architecture, and product development, building intelligent systems that leverage multiple foundation models for reasoning, automation, and data-driven insights.
This is a hands-on engineering role for individuals passionate about transforming cutting-edge AI technology into robust, reliable, and scalable applications.
What You'll Do

AI & LLM Application Development
- Design and develop production-ready AI applications using models available through AWS Bedrock and external providers
- Build and deploy:
- Retrieval-Augmented Generation (RAG) systems
- AI copilots and conversational agents
- AI-driven analytics and insights platforms
- Automated workflows and agent-based systems
- Document intelligence and document AI pipelines
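A minimal sketch of the RAG pattern listed above, with a toy keyword-overlap retriever standing in for embedding-based search (the chunk texts and prompt template are hypothetical illustrations, not a reference implementation):

```python
# RAG in miniature: retrieve relevant chunks, then ground the prompt in them.
# The scorer below is a naive keyword-overlap stand-in for real embedding search.

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by how many query words they share (toy scorer)."""
    words = set(query.lower().split())
    ranked = sorted(
        chunks,
        key=lambda c: len(words & set(c.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Stuff the retrieved context into a grounded prompt for the LLM."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

if __name__ == "__main__":
    docs = [
        "Bedrock exposes foundation models behind a single API.",
        "S3 stores raw documents before ingestion.",
        "pgvector adds vector similarity search to PostgreSQL.",
    ]
    top = retrieve("Which models does Bedrock expose?", docs)
    print(build_prompt("Which models does Bedrock expose?", top))
```

In production the retriever would query a vector store over real embeddings; the prompt-assembly step is otherwise the same shape.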
Model & Platform Experience
- Work with foundation models and APIs, including:
- AWS Bedrock (Anthropic Claude, Amazon Titan, Meta Llama, Cohere)
- OpenAI
- Open-source LLMs
- Implement multi-model routing and orchestration strategies
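As a sketch of what multi-model routing can look like on Bedrock: the routing table and task categories below are illustrative assumptions, and the model IDs follow Bedrock's naming convention but should be checked against the current catalog before use.

```python
# Route each task category to a Bedrock model, falling back to a default.
# Model IDs and routing rules here are illustrative, not a fixed policy.

ROUTES = {
    "reasoning": "anthropic.claude-3-sonnet-20240229-v1:0",
    "summarize": "amazon.titan-text-express-v1",
    "default":   "meta.llama3-8b-instruct-v1:0",
}

def route_model(task: str) -> str:
    """Pick a Bedrock model ID for the given task category."""
    return ROUTES.get(task, ROUTES["default"])

def invoke(task: str, prompt: str) -> str:
    """Send the prompt to the routed model via the Bedrock Converse API."""
    import boto3  # requires AWS credentials; imported lazily in this sketch
    client = boto3.client("bedrock-runtime")
    resp = client.converse(
        modelId=route_model(task),
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return resp["output"]["message"]["content"][0]["text"]
```

Real orchestration layers add fallbacks, retries, and per-route cost budgets on top of this lookup.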
Backend & Cloud Engineering
- Develop APIs and microservices using:
- Python
- Java, TypeScript, or Go (any one or more)
- Build cloud-native architectures leveraging:
- AWS S3, API Gateway, Lambda, Step Functions
- ECS / EKS, Docker, Kubernetes
- FastAPI
- Terraform (Infrastructure as Code)
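As one illustration of the serverless pieces above, a Lambda handler behind an API Gateway proxy integration might look like the sketch below (the `generate` callable is an injected stand-in for a real model call, so the handler stays testable without AWS):

```python
import json

# API Gateway proxy integration -> Lambda -> LLM call.
# `generate` is injected so the handler can be exercised without credentials.

def handler(event, context=None, generate=lambda prompt: f"echo: {prompt}"):
    """Parse the proxy event, validate input, and return an API Gateway response."""
    body = json.loads(event.get("body") or "{}")
    prompt = body.get("prompt", "")
    if not prompt:
        return {"statusCode": 400, "body": json.dumps({"error": "prompt required"})}
    return {"statusCode": 200, "body": json.dumps({"completion": generate(prompt)})}
```

In a deployed stack, `generate` would wrap the Bedrock client call, and Terraform would wire API Gateway, the Lambda, and its IAM role.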
Data, Embeddings & Search
- Create scalable pipelines for:
- Document ingestion and preprocessing
- Embedding generation
- Knowledge indexing and semantic search
- Model evaluation and benchmarking
- Work with vector databases including:
- PostgreSQL (pgvector)
- OpenSearch
- FAISS
- S3-based vector stores
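Under the hood, each of these stores answers the same question: which stored vectors are nearest to a query embedding. A brute-force sketch with hand-written toy vectors (real systems index millions of embeddings and approximate this search):

```python
import math

# Brute-force nearest-neighbor search over toy 2-D "embeddings" -- the
# operation pgvector, OpenSearch, and FAISS each accelerate at scale.

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query, index, k=1):
    """Return the k document IDs whose vectors are closest to the query."""
    ranked = sorted(index.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

index = {"doc-a": [1.0, 0.0], "doc-b": [0.0, 1.0], "doc-c": [0.7, 0.7]}
print(top_k([0.9, 0.1], index, k=2))  # -> ['doc-a', 'doc-c']
```

With pgvector the same query becomes an `ORDER BY embedding <=> $1 LIMIT k` in SQL; the ranking logic is identical.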
AI Operations, Evaluation & Observability
- Implement LLM monitoring and observability, including:
- Tracing and logging
- Hallucination detection
- Cost monitoring and optimization
- Prompt performance analysis
- Model drift detection
- Design evaluation frameworks and automated testing pipelines
- Apply AI safety best practices, guardrails, and prompt management
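Cost monitoring at its simplest is token accounting priced per model. A minimal sketch with placeholder rates (these are not real Bedrock prices):

```python
# Accumulate token usage per model and price it.
# Per-1K-token rates are placeholders for illustration only.

RATES = {  # model -> (input $/1K tokens, output $/1K tokens)
    "claude": (0.003, 0.015),
    "titan":  (0.0002, 0.0006),
}

class CostTracker:
    def __init__(self):
        self.usage = {}  # model -> [input_tokens, output_tokens]

    def record(self, model, input_tokens, output_tokens):
        tally = self.usage.setdefault(model, [0, 0])
        tally[0] += input_tokens
        tally[1] += output_tokens

    def total_cost(self):
        return sum(
            (inp / 1000) * RATES[m][0] + (out / 1000) * RATES[m][1]
            for m, (inp, out) in self.usage.items()
        )

tracker = CostTracker()
tracker.record("claude", 2000, 1000)   # 0.006 + 0.015 = $0.021
tracker.record("titan", 10000, 5000)   # 0.002 + 0.003 = $0.005
```

In production these counters would be emitted as metrics (e.g. per-route CloudWatch dimensions) rather than held in memory.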
Advanced AI Capabilities
- Build tool-using LLMs and AI agents
- Experience with orchestration frameworks:
- LangChain
- LangGraph
- LlamaIndex
- Optimize inference latency, reliability, and cost
- Deploy models using Docker and Kubernetes
- Experience with:
- MCP / FastMCP
- Reinforcement Learning (a plus)
- AWS SageMaker and MLOps workflows
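The tool-use pattern above reduces to a dispatch loop over structured tool requests. A minimal sketch, assuming the model has already emitted the requests (a real agent loops back to the LLM between steps, as the orchestration frameworks listed do):

```python
# Dispatch structured tool requests to registered Python callables.
# The tool registry and request shape are illustrative assumptions.

TOOLS = {
    "add": lambda a, b: a + b,
    "upper": lambda s: s.upper(),
}

def run_agent(steps):
    """Execute a sequence of {'tool': name, 'args': [...]} requests in order."""
    results = []
    for step in steps:
        fn = TOOLS.get(step["tool"])
        if fn is None:
            results.append(f"unknown tool: {step['tool']}")
            continue
        results.append(fn(*step["args"]))
    return results
```

Frameworks like LangGraph add state, branching, and model-driven planning around exactly this kind of loop.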
What We're Looking For
The ideal candidate:
- Thinks like a systems architect and builder
- Has delivered AI-driven products to production
- Understands tradeoffs between models and architectures
- Can design robust, scalable AI pipelines
- Is comfortable working in a rapidly evolving AI landscape
- Values strong engineering fundamentals and pragmatic decision-making
Example Problems You'll Work On
- Build an enterprise knowledge assistant using AWS Bedrock
- Design AI agents that automate complex business workflows
- Develop document intelligence pipelines for financial or operational data
- Architect multi-model orchestration frameworks
- Optimize LLM cost, latency, and reliability at scale