Zoom is a company that helps people stay connected through their collaboration platform. They are seeking a Senior AI Security Assurance Engineer to lead the offensive verification of AI systems and pipelines, ensuring the security, safety, and integrity of AI initiatives across the company.
Responsibilities:
- Leading adversarial verification of AI systems: Design and execute deep, unconstrained assessments of AI models, pipelines, and agents, testing guardrails, safety layers, and data boundaries through offensive experimentation
- Uncovering gaps between promise and practice: Identify where AI security, safety, or privacy controls fail under pressure. Surface the mismatch between claims and reality
- Assessing the full AI lifecycle: Evaluate data, training, and deployment pipelines for risks like model poisoning, prompt injection, or fine-tuning abuse
- Developing AI-powered security discovery systems: Research, prototype, and operationalize machine learning–driven approaches to automatically detect, predict, and prioritize vulnerabilities and behavioral deviations in Zoom’s products and platform
- Automating and scaling offensive operations: Build AI-based frameworks to scale red teaming, vulnerability discovery, and bug bounty triage. Use LLMs, anomaly detection, and pattern learning to enhance automation and coverage
- Adapting cutting-edge research: Integrate the latest findings from offensive security research, autonomous agents, and AI-driven vulnerability analysis into Zoom’s security assurance programs
- Shaping AI security methodologies: Build frameworks for continuous AI-driven adversarial testing, automated validation, and system monitoring that scale across teams and products
- Translating findings into impact: Communicate verified risks and systemic weaknesses clearly to engineering and leadership, pairing technical insight with strategic direction
- Staying ahead of the curve: Track evolving AI architectures, attack vectors, and defenses, turning new research into offensive and defensive capability
Requirements:
- Have deep understanding of generative AI systems (transformers, diffusion models, multi-agent frameworks) and their security failure modes
- Have experience building or adapting novel AI/ML methods to real-world security problems
- Demonstrate proficiency in Python, ML frameworks (PyTorch, TensorFlow, Hugging Face, LangChain), and modern cloud/data environments
- Be skilled at uncovering the true behavior and limitations of AI and platform systems through experimentation, code review, and automated adversarial techniques
- Be skilled at setting direction, advising peers, and communicating high-impact risks to executives
- Be unafraid to challenge assumptions or expose uncomfortable truths in service of user and system safety
- Demonstrate experience in red teaming, exploit development, or vulnerability research