Support reviews of architectures, development approaches, model usage patterns, and technical risk areas.
Advise on AI engineering considerations such as latency, reliability, observability, testability, guardrails, data flow, and system integration.
Evaluate and compare modern AI frameworks, agentic libraries, and experimentation harnesses for suitability in mission use cases.
Assist with the development of technical guidance, prototypes, engineering recommendations, and briefings for government stakeholders.
Support test and evaluation activities for AI tools, including performance assessment, qualitative review, and workflow validation.
Collaborate with platform engineers, product leads, mission users, and government leadership to ensure AI efforts remain technically grounded and operationally relevant.
Stay current on rapid changes in the AI ecosystem and provide informed recommendations on emerging tools and practices.
Requirements
Bachelor’s degree in Computer Science, Engineering, Data Science, Mathematics, or a related field.
Experience designing, building, integrating, or evaluating AI/ML-enabled applications or software tools.
Familiarity with modern AI application patterns, including LLM-based applications, RAG, prompt engineering, agentic systems, and AI-assisted workflows.
Working knowledge of one or more agentic or orchestration frameworks such as LangChain, LangGraph, Semantic Kernel, AutoGen, CrewAI, Haystack, LlamaIndex, or similar ecosystems.
Experience with software development fundamentals, APIs, data handling, and system integration.
Ability to assess technical tradeoffs and communicate them clearly to both engineers and government stakeholders.