Define, build, and refine mechanisms to secure AI systems (including self-hosted models, LLM APIs, agents, MCPs, and other core components of the AI stack) against adversarial behavior of all kinds
Understand technically complex AI systems, identify potential weaknesses in their architecture, and implement improvements
Spend at least 50% of time performing hands-on remediation, working closely with peer engineers to drive remediations
Plan and carry out threat modeling activities and realistic threat simulations across our offerings
Conduct cybersecurity evaluations and lead AI security assessments in a cross-functional environment
Develop initiatives that improve our ability to evaluate AI systems and enhance the organization's prevention, detection, response, and threat hunting capabilities
Provide guidance and education to developers to help deter and prevent threats
Requirements
Hands-on coding and prompting experience
Bachelor of Science or Master of Science in Computer Science or a related field, or equivalent experience
Technical and process subject matter expertise in AI security services and attacker tactics, techniques, and procedures (TTPs)
Solid understanding of LLMs, AI architecture patterns, machine learning models, and related technologies such as the Model Context Protocol (MCP)
Understanding of application security principles and secure coding practices
Experience developing and implementing security procedures and policies
Strong problem-solving, project management, leadership, and communication skills
Self-motivated with a willingness to take ownership of tasks
4+ years of industry experience
Tech Stack
Cybersecurity
Benefits
Full-time U.S. employees enjoy a comprehensive benefits program including equity, health, dental, vision, retirement, fitness, commuter and dependent care accounts, and more.
Full-time employees outside the U.S. enjoy a comprehensive benefits program tailored to their region of residence.