ImmunityBio, Inc. is a biotechnology company focused on developing cell and immunotherapy products to enhance the human immune system against cancer. The Senior Security Engineer will oversee the security of AI systems, collaborating with various teams to implement secure practices and address security challenges in AI/ML environments.
Responsibilities:
- Design, implement, and maintain security controls across the full AI/ML lifecycle, including training data validation, model registry policies, deployment guardrails, and production monitoring for anomalous model behavior
- Develop and maintain comprehensive threat models for AI/ML systems, covering prompt injection, data leakage, model evasion and extraction, data poisoning, and agent hijacking scenarios
- Lead red teaming and adversarial testing of LLMs and agentic workflows — including jailbreak attempts, prompt injection, output manipulation, and business logic abuse — and drive remediation with engineering teams
- Partner with ML engineers to embed security into model development pipelines, including secure training, evaluation, and deployment processes, as well as secure use of RAG architecture, tooling integrations, and multi-agent workflows
- Define and implement policies for safe prompt and response handling, including PII and sensitive content detection, output filtering, and usage logging to support investigations and compliance requirements
- Work with security engineering to integrate AI telemetry into SIEM, EDR, and SOC workflows; define and maintain runbooks for AI-related security incidents and forensic investigations
- Lead the creation, modification, and maintenance of AI security documentation, including threat model reports, security specification documents, SOPs, data flow diagrams, and network topology documentation
- Stay current on AI-specific attack techniques, emerging tooling, and relevant frameworks (NIST AI RMF, OWASP LLM Top 10, MITRE ATLAS, secure AI development guidelines) and translate findings into internal standards and controls
- Consult and collaborate with cross-functional SMEs across ML, Product, Platform Engineering, Legal, and Compliance to influence security design decisions and ensure operability and technical feasibility
- Provide technical mentoring and oversight to less experienced security engineers responding to and investigating AI-related security issues
- Create, edit, and adhere to Standard Operating Procedures (SOPs), security playbooks, and standardized documentation templates
- Perform ad hoc and cross-functional projects as assigned to support business needs and provide developmental opportunities
Requirements:
- Bachelor's degree in Computer Science, Information Security, Engineering, or a related field with 7+ years of relevant experience is required
- 5+ years of experience in application security, product security, or offensive security, including hands-on threat modeling and secure design for complex systems, is required
- Practical, demonstrated experience assessing or attacking AI/ML or LLM systems (e.g., prompt injection, model abuse, data exfiltration via LLMs, or adversarial examples) is required
- Excellent interpersonal skills and ability to work effectively in a cross-functional team environment spanning security, ML, and product disciplines
- Excellent technical writing, communication, and organizational skills, with the ability to translate complex security risks into clear trade-offs and actionable requirements for non-security stakeholders
- Strong proficiency in Python and familiarity with modern ML/LLM frameworks (e.g., LangChain, LlamaIndex, Hugging Face, OpenAI API)
- Solid understanding of common web and API security vulnerabilities (OWASP, authentication and authorization, rate limiting, abuse prevention) and how they manifest in AI-powered applications and agents
- Strong knowledge of AI-specific threat frameworks including NIST AI RMF, OWASP LLM Top 10, and MITRE ATLAS
- Strong data analytics skills with experience integrating AI telemetry into security monitoring and detection workflows
- Strong leadership skills with the ability to drive security initiatives independently and mentor junior team members
- Experience working within or alongside regulated industries with compliance obligations (e.g., NIST AI RMF, SOC 2, ISO 27001) is preferred
- Experience with RAG pipelines, vector databases, or agent frameworks and their associated security risks is preferred