Lockheed Martin is seeking a Platform-focused AI Engineer to drive the development of its internal AI Factory. The role involves building robust back-end services, intuitive front-end interfaces, and well-defined API contracts that enable AI/ML practitioners to deploy GenAI and MLOps workloads at scale.
Responsibilities:
- Design, implement, and maintain high-performance RESTful APIs that serve as the backbone for AI model orchestration and resource management (a minimal endpoint sketch follows this list)
- Develop end-to-end features, from building responsive React/Next.js front-ends to engineering scalable Python/Go back-end services
- Collaborate on the design of AI Platform components, ensuring that internal services are modular, discoverable, and capable of supporting LLM, RAG, and Agentic workflows
- Proactively integrate AI coding assistants (e.g., Open Code, Cline, Roo, Cursor) and custom LLM agents into your daily workflow to automate boilerplate, generate tests, and conduct initial code reviews
- Bridge the gap between low-level Kubernetes infrastructure and high-level user interfaces by creating middleware that abstracts complexity for the end user
- Optimize the data flow between front-end dashboards, back-end metadata stores (PostgreSQL/Redis), and the underlying GPU-accelerated compute clusters
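To make the API expectations above concrete, here is a minimal sketch of a versioned deployment endpoint in FastAPI, assuming a simple request model and an in-memory metadata store; the route names, `DeploymentRequest` fields, and storage are illustrative placeholders, not the actual AI Factory API.

```python
# Hypothetical sketch of a versioned model-deployment endpoint.
# Resource names and fields are illustrative assumptions only.
from uuid import uuid4

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="AI Factory Orchestration API", version="1.0.0")


class DeploymentRequest(BaseModel):
    model_name: str           # e.g. a model registered in the model registry
    replicas: int = 1         # desired serving replicas
    gpu_per_replica: int = 0  # GPUs requested per replica


# In-memory stand-in for a PostgreSQL/Redis-backed metadata store.
deployments: dict[str, dict] = {}


@app.post("/api/v1/deployments", status_code=201)
def create_deployment(req: DeploymentRequest) -> dict:
    """Accept a deployment request and record it for the scheduler."""
    deployment_id = str(uuid4())
    deployments[deployment_id] = {"id": deployment_id, **req.model_dump(), "status": "pending"}
    return deployments[deployment_id]


@app.get("/api/v1/deployments/{deployment_id}")
def get_deployment(deployment_id: str) -> dict:
    """Return deployment status for front-end dashboards to poll."""
    if deployment_id not in deployments:
        raise HTTPException(status_code=404, detail="deployment not found")
    return deployments[deployment_id]
```

In a production platform the in-memory dict would be a PostgreSQL or Redis metadata store and `create_deployment` would hand the request to a Kubernetes controller; the versioned API contract is the part that stays stable.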
Requirements:
- Bachelor's Degree in Engineering, Computer Science, or related technical discipline
- Proven experience with both Back-End frameworks (FastAPI, Flask, Gin, or Spring Boot) and Front-End frameworks (React, Angular, or Next.js)
- Advanced knowledge of RESTful API design principles, including versioning, authentication (OAuth2/OIDC), and documentation (OpenAPI/Swagger)
- Demonstrated ability to use AI-assisted development tools and agents to increase engineering velocity and maintain high-quality codebases
- Hands-on experience deploying and managing containerized services within Kubernetes
- Must be a US Citizen
- The ability to apply strategic thinking to complex platform problems, with a focus on long-term scalability and Cloud Native software development practices
- Experience building and optimizing large-scale distributed systems and Platform services that support intensive compute workloads
- Proven results in executing on infrastructure-intensive programs, specifically within Kubernetes and public cloud environments (AWS, GCP, Azure)
- Expert-level familiarity with Helm or Kustomize to manage complex service deployments
- A passionate, can-do attitude and the resourcefulness to navigate a constantly changing industry
- Deep empathy for teammates and users, with a desire to make their workflows frictionless, efficient, and useful
- Proven experience collaborating with virtual, cross-functional teams to deliver mission-critical software
- Proven experience in operationalizing the ML lifecycle, specifically in building and maintaining automated training and inference pipelines
- Knowledge of Model Registry management, lineage tracking, and the integration of MLOps tools like MLflow, Kubeflow, or Flyte into a unified platform
- Understanding of the end-to-end AI/ML lifecycle, with experience optimizing inference services, vector database integration (Pinecone, Milvus, or Weaviate), and RAG (Retrieval-Augmented Generation) architectures (see the retrieval sketch after this list)
- Familiarity with designing or deploying agentic workflows and orchestration frameworks (such as LangChain, CrewAI, or AutoGen) to solve non-linear engineering or business problems
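For context on the RAG pattern named in the requirements, the sketch below illustrates the basic retrieve-then-generate flow; `embed`, the in-memory index, and `generate` are hypothetical stand-ins for a real embedding model, a vector database such as Pinecone, Milvus, or Weaviate, and an LLM client.

```python
# Minimal retrieve-then-generate (RAG) sketch. embed(), the in-memory
# index, and generate() are hypothetical placeholders, not a real stack.
import numpy as np


def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy deterministic embedding; a real system would call an embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)


# In-memory "vector database": a list of (document, embedding) pairs.
corpus = [
    "Helm charts package Kubernetes manifests for repeatable deployments.",
    "MLflow tracks experiments, models, and lineage in a model registry.",
    "RAG grounds LLM answers in documents retrieved at query time.",
]
index = [(doc, embed(doc)) for doc in corpus]


def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by cosine similarity to the query embedding."""
    q = embed(query)
    scored = sorted(index, key=lambda pair: float(q @ pair[1]), reverse=True)
    return [doc for doc, _ in scored[:k]]


def generate(query: str, context: list[str]) -> str:
    """Stand-in for an LLM call: a real system would prompt a model with the context."""
    return f"Answer to {query!r} grounded in: {context}"


print(generate("What does RAG do?", retrieve("What does RAG do?")))
```

A production platform would swap the in-memory index for a vector database client and `generate` for a call to an inference service; the control flow stays the same.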