Lirio is a technology company that applies behavioral science and AI to improve health interventions. The Senior AI Development Platform Engineer will design and maintain an AI-augmented software delivery platform, collaborating across teams to improve developer productivity and ensure compliance in a regulated environment.
Responsibilities:
- Evaluate and recommend AI coding tools (Cursor, Claude Code, GitHub Copilot, Codex CLI, and emerging tools) against Lirio's developer workflows, compliance constraints, and codebase characteristics
- Conduct structured evaluations of new models and tools as they launch, testing against real coding tasks in our environment, not just vendor benchmarks
- Maintain the evaluation framework and tooling inventory, ensuring the team uses approved, security-reviewed tools on compliance-sensitive systems
- Design and build the agent orchestration layer: instruction files (.cursor/rules/, AGENTS.md, CLAUDE.md), MCP connectors to Azure DevOps and/or GitHub, context packaging templates, and agent routing configurations
- Enable AI coding agents to execute multi-step software development tasks autonomously (decompose, plan, code, test, validate, and submit PRs) with quality gates at each phase and defined escalation points
- Design agent coordination patterns (planner-coder-reviewer, sub-agent delegation) and workflow state management for complex tasks that span multiple agent steps
- Define human escalation triggers so that when agents encounter ambiguity, scope boundaries, or compliance-sensitive decisions, the workflow surfaces the decision to a human rather than guessing
- Ensure AI coding agents receive the right context for each task type, including project conventions, compliance constraints, coding standards, and relevant codebase context. Manage context window budgets so agents maintain accuracy across large codebases
- Build and maintain work decomposition patterns and templates that structure work items for effective agent execution
- Architect integrations between the AI developer platform and the development ecosystem, including work item tracking, source control, CI/CD pipelines, and code review workflows, forming a coherent, automated delivery chain
- Build rules, instruction files, and CI pipeline checks that flag PHI exposure, tenant isolation concerns, and security issues in AI-generated code before it reaches human review
- Translate HIPAA/HITRUST compliance requirements into automated guardrails, using defense-in-depth controls spanning instruction files, sandbox configurations, CI checks, and human review gates
- Define and maintain permission tiers for AI agent operations (read-only, metadata access, code writes with approval) to preserve BAA compliance
- Ensure AI-assisted delivery produces auditable artifacts, including PR conventions, work item linking, and AI-usage tracking that support compliance evidence collection
- Secure the agent input chain by evaluating and mitigating prompt injection risks from work item descriptions, code comments, PR content, and third-party instructions that flow through the harness into agent context
- Systematically test new AI models (Claude Opus, GPT Codex, Gemini Pro, etc.) against Lirio's actual coding tasks to determine when to adopt, swap, or route differently across team workflows
- Maintain model routing guidance: which models are best suited for which task types (complex architecture vs. boilerplate generation vs. test writing vs. code review)
- Monitor model quality across updates and pin versions where stability is critical
- Define which types of code changes AI agents can submit with minimal review vs. which require full human compliance review
- Establish quality gates and agent supervision practices that define what "done" looks like for AI-assisted work before it reaches human review
- Create and maintain branch/PR conventions for AI-assisted work (agent/* branch prefixes, PR templates with work item links, AI-assisted tagging)
- Define work item conventions (readiness criteria, acceptance criteria format, agent status tags) that structure work for both human and AI execution
- When AI-generated code introduces defects, vulnerabilities, or compliance issues, diagnose whether the root cause is in the instruction files, the context packaging, the model routing, or the review process
- Tighten guardrails and adjust workflows based on incident learnings
- Maintain a feedback loop between production issues and the developer platform's safety controls
- Build and maintain observability for the AI developer platform, tracking agent task completion rates, quality gate pass rates, cost per task, guardrail trigger frequency, and model performance trends across the team's workflows
- Use platform telemetry to identify where AI-assisted delivery is producing value vs. where it's creating friction, and feed those insights back into harness design, model routing, and workflow standards
- Provide subject matter expertise on AI-assisted development practices to engineering teams
- Build prototypes, reference integrations, and proof-of-concept solutions to validate platform design decisions and de-risk implementations
- Promote AI-assisted engineering tools and modern development practices consistent with Lirio's engineering culture
- Document platform architecture, workflows, integration guides, and best practices
- Serve as a contributing member of Lirio's Architecture Team, ensuring the AI developer platform maintains architectural coherence with the broader system
- Partner with Product Management and delivery leadership to shape how AI-assisted delivery integrates with planning and execution workflows
- Work closely with Cloud, Security, and DevOps teams to ensure the AI developer platform operates within Lirio's infrastructure and security boundaries
- Participate in the Engineering Council, contributing to engineering standards, patterns, and technical governance as they relate to AI-augmented delivery
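To make the guardrail responsibilities above concrete, the check that flags PHI exposure in AI-generated code before it reaches human review could be sketched as a small CI step like the following. This is a minimal illustration under stated assumptions: the pattern names, regexes, and `scan_diff` function are hypothetical, not Lirio's actual controls, and a production control would rely on a vetted detection library and organization-specific identifier formats.

```python
import re

# Illustrative PHI-risk patterns (hypothetical, for sketch purposes only);
# a real control would use a vetted detection library and
# organization-specific identifier formats.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
    "dob": re.compile(r"\b(dob|date[_\s]of[_\s]birth)\b", re.IGNORECASE),
}

def scan_diff(diff_text: str) -> list[tuple[str, str]]:
    """Scan unified-diff text and return (pattern_name, line) pairs
    for added lines that match a PHI-risk pattern."""
    findings = []
    for line in diff_text.splitlines():
        # Only inspect lines the change introduces; skip file headers.
        if not line.startswith("+") or line.startswith("+++"):
            continue
        for name, pattern in PHI_PATTERNS.items():
            if pattern.search(line):
                findings.append((name, line))
    return findings
```

In a pipeline, a step like this would run over `git diff` output on agent-authored branches and fail the build on any finding, surfacing the flagged lines to the human reviewer rather than letting the PR proceed silently.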
Requirements:
- Bachelor's degree in a related field
- 5-7 years of related experience
- AI-assisted development fluency: Hands-on experience with AI coding tools (Cursor, GitHub Copilot, Claude Code, Codex CLI, or similar). Not just casual use, but experience building workflows, instruction files, or agent orchestration patterns around them
- Platform engineering or developer productivity background: Experience building internal developer platforms, CI/CD pipelines, developer tooling, or infrastructure that accelerates how engineering teams deliver software
- Compliance in regulated environments: Experience working within HIPAA, HITRUST, SOC 2, or equivalent compliance frameworks. Ability to translate compliance requirements into automated guardrails rather than manual review bottlenecks
- Programming proficiency: Strong skills in Java and/or Python (Lirio's primary stack). Ability to work across codebases, write tooling, and understand the code that AI agents produce
- Agent orchestration and integration: Familiarity with MCP (Model Context Protocol), LLM APIs, instruction file systems, or similar patterns for configuring and constraining AI agent behavior. Experience building integrations between developer tools and enterprise systems (work item tracking, source control, CI/CD)
- CI/CD and DevOps proficiency: Strong experience with CI/CD pipelines, automated testing, code review workflows, and deployment automation. Experience with Azure DevOps (ADO) is a plus; GitHub Actions/Workflows experience is also valuable
- Security awareness: Understanding of secure software delivery practices, including code scanning, dependency management, access controls, and audit trail requirements in the context of AI-generated code
- Bias toward rapid, iterative delivery: This role needs to produce working developer platform capabilities fast, shipping in weeks, learning from real usage, and improving continuously. We're looking for someone who builds momentum by delivering early wins, not someone who designs a complete system on paper before starting
- Experience in healthcare technology or another heavily regulated industry
- Demonstrated ability to lead platform adoption and drive organizational change across engineering teams
- Technical writing ability: This role produces documentation, guides, and standards that the entire team uses
- Familiarity with multi-model AI routing, model benchmarking, or model evaluation frameworks preferred
- Experience with infrastructure as code (Terraform), containerization (Docker, Kubernetes/AKS), and cloud platforms (Azure preferred)