Twilio is a company shaping the future of communications with innovative solutions. They are seeking a Staff Engineer, AI Security to lead autonomous defense for the AI lifecycle, build multi-agent frameworks, and ensure a secure environment for agentic workflows and AI innovation.
Responsibilities:
- Serve as the primary subject matter expert for all AI and machine learning security initiatives across security and R&D
- Design and manage AI gateways that provide a centralized control plane for authentication, authorization, and rate limiting across all model and tool interactions
- Build and maintain an autonomous agentic security framework that uses multi-agent orchestration for end-to-end alert triage, investigation, and remediation
- Develop agentic identity models using OAuth 2.1 to propagate identity across trust boundaries and prevent the confused deputy problem
- Help govern the AI-augmented software development lifecycle by integrating real-time security gates into the developer environment and CI/CD pipeline
- Manage agentic security solutions that secure the AI lifecycle and protect AI workloads at runtime
- Author company-wide AI security standards and implement these security checks across Twilio’s stack
- Implement human-in-the-loop checkpoints and transactional safety protocols for high-impact or destructive agentic actions
- Partner with engineering leadership to set the long-term roadmap for identity-centric security and automated posture management
- Act as a knowledge multiplier by mentoring security engineers and developing secure-by-default, paved-road templates for R&D teams
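To make the human-in-the-loop checkpoint responsibility concrete, here is a minimal sketch of gating destructive agent actions behind a human approver. The `DESTRUCTIVE_ACTIONS` policy set, `require_human_approval` decorator, and `execute` tool are hypothetical names for illustration, not part of any Twilio system:

```python
import functools

# Hypothetical policy list: actions that must pause for human review.
DESTRUCTIVE_ACTIONS = {"delete_resource", "rotate_credentials"}

def require_human_approval(approver):
    """Gate an agent tool call behind a human checkpoint when the action is destructive."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(action, *args, **kwargs):
            if action in DESTRUCTIVE_ACTIONS and not approver(action):
                # Block the call and report, rather than executing silently.
                return {"status": "blocked", "action": action}
            return fn(action, *args, **kwargs)
        return wrapper
    return decorator

# Simulate a reviewer who denies every request.
@require_human_approval(approver=lambda action: False)
def execute(action, target):
    """Hypothetical agent tool that performs an action on a target resource."""
    return {"status": "executed", "action": action, "target": target}
```

In a real deployment the `approver` callback would page a human (e.g. via a ticket or chat approval flow) instead of returning a constant; the key property is that destructive calls cannot proceed without an explicit yes.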
Requirements:
- 8+ years of experience in security engineering with at least 3 years focused on AI or machine learning security operations (MLSecOps)
- Expertise in orchestrating multi-agent systems with frameworks such as AWS Strands, LangGraph, and CrewAI, specializing in runtime isolation, PII redaction, and defending against indirect prompt injection in agentic environments
- Hands-on experience with AI-specific frameworks (e.g., MITRE ATLAS, MAESTRO, OWASP Top 10 for LLMs/Agents/MCP) to threat model and defend against a wide spectrum of risks, including direct/indirect prompt injection, training data poisoning, tool poisoning, and data exfiltration within agentic workflows
- Proficiency in securing end-to-end AI pipelines, from data ingestion and training to model deployment and monitoring
- Strong communication skills to translate complex AI risks into actionable business logic for stakeholders
- Hands-on experience with modern application security tooling, including SAST, DAST, and SCA, and with adapting these tools to catch AI-specific vulnerabilities such as indirect prompt injection
- Expertise in identity standards including OAuth 2.1 and PKCE
- Experience with AI Red Teaming and conducting adversarial simulations against Large Language Models (LLMs) and agentic systems
- Proficiency in at least one general-purpose programming language (e.g., Python, Go) with experience in container security and workload isolation
- Proven ability to operate with autonomy and drive high-impact outcomes in ambiguous environments by identifying and executing on critical projects without predefined roadmaps or direct supervision
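As a concrete anchor for the OAuth 2.1/PKCE requirement (OAuth 2.1 makes PKCE mandatory for the authorization code flow), here is a minimal sketch of the RFC 7636 S256 verifier/challenge pair using only the Python standard library; the function names are illustrative:

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636)."""
    # 32 random bytes, base64url-encoded without padding, yields a 43-char verifier.
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge

def verify_pkce(verifier, challenge):
    """Server-side check: recompute the S256 challenge from the presented verifier."""
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii") == challenge
```

The client sends `challenge` with the authorization request and later presents `verifier` at the token endpoint; the server recomputes and compares, which prevents an interceptor who only saw the challenge from redeeming the authorization code.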