Conduct comprehensive application security assessments of agentic AI pipelines, tools, and frameworks for leading companies and labs. Examine vulnerabilities in model architectures, guardrails, and deployment infrastructure while developing mitigation strategies.
Develop and share novel prompt injection techniques targeting agentic workflows, including indirect injection via tool outputs, multi-turn manipulation, and cross-agent exploitation. Produce actionable attack libraries and defensive countermeasures for client engagements.
Conduct security assessments of client code bases using a combination of static analysis, dynamic testing, and manual code review, identifying vulnerabilities and developing mitigation strategies, with a focus on findings at the intersection of application security and Agentic AI security.
Conduct threat modeling and risk assessments to proactively identify potential risks for clients and develop mitigation strategies for future prevention, with particular attention to prompt injection attack surfaces in agentic orchestration layers.
Work with leading industry teams to review system code and architecture, and help assure their products through system analysis and modeling.
Develop and contribute to AI regulatory frameworks, establishing assurance methods and auditing processes for mission-critical AI applications while ensuring alignment with emerging industry standards and safety requirements.
Requirements
Demonstrated interest and experience in agentic AI security, with demonstrated ability to identify and mitigate AI-specific vulnerabilities across complex systems, including hands-on experience with prompt injection attacks and defenses.
Deep understanding of AI/ML architectures, frameworks (PyTorch, Jax, LangChain, RAG systems, etc.), and MLOps practices, combined with robust security engineering expertise.
Track record of conducting technical security assessments of software, including software and system hardening, security policy analysis, and implementing effective security measures.
Practical experience designing and executing prompt injection workflows against production LLM systems, agentic pipelines, and tool-use environments, including familiarity with emerging taxonomies and mitigation approaches.
Strong knowledge of multiple programming languages such as Rust, Golang, Kotlin, Swift, Objective-C, JavaScript/TypeScript, Python, Ruby, C and/or C++ for both security analysis and tool development.
A creative and adversarial mindset, with a passion for discovering novel attack vectors and understanding how systems work across many layers of abstraction.
Ability to effectively communicate complex security concepts to diverse stakeholders and deliver clear, actionable recommendations.
Tech Stack
JavaScript
Kotlin
Objective-C
Python
PyTorch
Ruby
Rust
Swift
TypeScript
Go
Benefits
Competitive salary complemented by performance-based bonuses.
Fully company-paid insurance packages, including health, dental, vision, disability, and life.
A solid 401(k) plan with a 5% match of your base salary.
20 days of paid vacation with flexibility for more, adhering to jurisdictional regulations.
4 months of parental leave to cherish the arrival of new family members.
$10,000 in relocation assistance if moving to NYC.
$1,000 Working-from-Home stipend to create a comfortable and productive home office.
Annual $750 Learning & Development stipend for continuous personal and professional growth.
Company-sponsored all-team celebrations, including travel and accommodation, to foster community and recognize achievements.
Philanthropic contribution matching up to $2,000 annually.
Senior Security Engineer at Trail of Bits | JobVerse