Code for America believes in transforming government services through technology and is seeking a Principal AI Application Engineer to bridge policy mandates with technical execution. The role involves building modular AI systems and collaborating with stakeholders across policy, product, and program teams to improve public service delivery.
Responsibilities:
- Build and ship high-impact experiments (e.g., digital lockers or AI-augmented procurement tools) to expand the "impact possibilities frontier"
- Translate vague policy objectives into robust, working systems
- Identify when to use LLMs, agentic orchestration, or RAG patterns versus simple rules engines or traditional ML models
- Collaborate with policy and domain experts to co-design civic-sector benchmarks, translating complex regulatory requirements into automated evaluation pipelines that rigorously measure model performance
- Identify and define new opportunity spaces by translating emerging policy, technology, and user needs into actionable technical bets
- Architect foundational tools, including declarative task specifications and agentic data layers
- Design infrastructure that respects public sector constraints, focusing on portability and explainability
- Build data layers that interoperate with legacy systems (COBOL, SQL, etc.) to deliver modern value without multi-year migrations
- Establish Code for America as a leader in responsible AI through external thought leadership, including publications, talks, and open-source contributions
- Share demos and earned insights internally to help the organization iterate toward better standards and internal use cases of responsible AI
- Document architectural decisions, successes, and failures to create a blueprint for responsible AI in government
- Drive alignment across engineering, product, policy, and program teams to ensure solutions are technically sound, policy-compliant, and operationally viable
- Partner with and mentor fellow engineers through hands-on code reviews and technical guidance, ensuring the team stays grounded in best practices for responsible AI
- Maintain system health through rigorous, hands-on code reviews and the development of shared utilities
- Ensure craftsmanship and system explainability for the vulnerable populations these systems serve
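Two of the responsibilities above, architecting declarative task specifications and knowing when a rules engine beats an LLM, can be illustrated together. The sketch below is purely hypothetical (the `TaskSpec` fields and `choose_engine` routing are invented for illustration, not part of any Code for America codebase), but it shows the flavor of a declarative spec that prefers auditable rules over model calls in high-stakes public services:

```python
from dataclasses import dataclass

# Hypothetical sketch: a declarative task specification that records the
# properties of a task, plus a router that prefers simple, explainable
# rules engines whenever the task allows it.

@dataclass
class TaskSpec:
    name: str
    deterministic: bool  # can a rules engine fully decide this task?
    high_stakes: bool    # does an error directly affect a benefit decision?

def choose_engine(spec: TaskSpec) -> str:
    """Route a task to the simplest engine that can handle it."""
    if spec.deterministic:
        return "rules_engine"     # fully auditable, no model involved
    if spec.high_stakes:
        return "llm_with_review"  # model output gated by human review
    return "llm"                  # low-stakes, fuzzy task: model is fine

eligibility = TaskSpec("income_threshold_check", deterministic=True, high_stakes=True)
print(choose_engine(eligibility))  # rules_engine
```

The design choice being illustrated: routing logic lives in declarative data (`TaskSpec`), not scattered conditionals, so policy experts can review it without reading model code.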
Requirements:
- 7+ years of experience in high-ownership environments (former technical founders encouraged); ability to take a vague objective to a finished system
- Hands-on experience building with LLMs, agentic orchestration, and RAG patterns, with the pragmatism to know when not to use them
- Ability to think in 'primitives' and 'capabilities,' preferring modular, reusable frameworks over bespoke scripts
- Passion for bias detection, harm mitigation, and building systems that are explainable to the people they serve
- Mastery of Git, Linux, CI/CD, Infrastructure as Code (Terraform), and container-based workflows
- A critical eye toward the limitations of AI, especially 'black box' logic in high-stakes public services
- Demonstrated ability to influence technical direction and drive alignment across teams without formal authority
- Ability to distill complex architecture into compelling prose for policy-makers or the public
- Prior work in Civic Tech, FinTech, or HealthTech where auditability is a core requirement
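Several requirements above center on auditability and a distrust of "black box" logic. As a minimal sketch of what that looks like in practice (all names here are hypothetical, not a specification of any real system), a model-assisted decision can be captured as a tamper-evident audit record with a plain-language rationale:

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical sketch: an append-only audit record for a model-assisted
# decision, so every outcome can be traced and explained after the fact.

def audit_record(case_id: str, inputs: dict, decision: str, rationale: str) -> dict:
    record = {
        "case_id": case_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": inputs,
        "decision": decision,
        "rationale": rationale,  # plain-language explanation, not model internals
    }
    # A content hash over the canonical JSON lets auditors detect
    # after-the-fact tampering with the record.
    payload = json.dumps(record, sort_keys=True).encode()
    record["sha256"] = hashlib.sha256(payload).hexdigest()
    return record

rec = audit_record("case-001", {"income": 1200}, "eligible", "Income below threshold")
print(rec["decision"])  # eligible
```

The point of the rationale field is the requirement above: architecture that can be distilled into compelling prose for policymakers or the public, one decision at a time.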