Develop an AI risk assessment framework and codify technical guidelines to mitigate LLM bias, hallucinations, and toxic outputs.
Design and secure autonomous AI agents, including RAG architectures, with deep‑dive threat modeling and adversarial testing.
Establish secure authentication, authorization, and audit protocols for AI communication frameworks (e.g., MCP).
Assess and guide the secure adoption of AI capabilities across enterprise applications, focusing on data security, access controls, and model input/output handling.
Automate security gates to verify model provenance and pipeline integrity, and to detect misconfigurations across the ML and software supply chain.
Integrate security practices throughout the SDLC in close partnership with engineering and DevOps teams.
Promote secure coding standards, tooling, and automation, and mentor teams in secure development and pipeline practices.
Design, implement, and maintain security controls within CI/CD platforms (GitHub Actions, Jenkins, GitLab, Azure DevOps).
Ensure software integrity through code signing, artifact validation, provenance, SBOM generation, and dependency scanning.
Automate SAST, DAST, SCA, container, and AI‑specific vulnerability scanning in build and release pipelines, including checks that detect insecure LLM orchestration patterns.
Identify and remediate misconfigurations, secrets exposure, and access control gaps in CI/CD and pipeline environments.
Design, deploy, and tune WAF rules and API security protections; conduct API risk assessments and promote secure API design patterns.
Perform secure code reviews and expand automated security testing coverage across pipelines and deployed services.
Triage, prioritize, and track vulnerabilities across source code, pipelines, and production services.
Facilitate threat modeling for applications, APIs, delivery pipelines, and AI‑driven features.
Expand security automation for API discovery, secrets detection, and dependency risk management.
Act as a trusted advisor to product, platform, and DevOps teams by translating technical risk into business impact.
Support the Security Champions program.
Partner with SOC and Incident Response teams during software supply chain, pipeline, or AI‑related security incidents.
Leverage AI‑powered security tools to detect anomalies, code risks, and CI/CD misconfigurations.
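To illustrate the pipeline security automation described above (secrets detection and misconfiguration checks in CI/CD configs), a minimal Python sketch might look like the following. The rule names and regex patterns here are hypothetical simplifications; production scanners such as Gitleaks or TruffleHog ship far larger, tuned rule sets.

```python
import re

# Hypothetical example patterns; real scanners maintain hundreds of tuned rules.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{20,}['\"]"
    ),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text: str) -> list[tuple[int, str]]:
    """Return (line_number, rule_name) pairs for each suspected secret."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings

# Example: a pipeline config with a hard-coded credential that should be
# moved into a secrets manager rather than committed to the repository.
ci_config = """
env:
  API_KEY: "abcd1234abcd1234abcd1234"
steps:
  - run: echo hello
"""
```

A gate like this would typically run as a pre-merge pipeline step, failing the build whenever `scan_text` returns any findings.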
Requirements
10–15+ years of experience in Application Security, Product Security, or Secure Software Development.
Hands‑on experience securing LLMs, AI agents, and MCP servers, with familiarity with OWASP Top 10 for LLM Applications and MITRE ATLAS.
Strong hands‑on experience with CI/CD pipelines and source repositories (GitHub, GitLab, Jenkins, etc.).
Knowledge of software supply chain security frameworks (e.g., SLSA, NIST SSDF), including build integrity, artifact signing (Sigstore, Cosign), and secrets management.
Strong background in WAF tuning, API security, and vulnerability identification and remediation.
Proficiency in at least one programming language (Python, Java, Go, or JavaScript/Node.js).
Experience with automated security testing tools, including SAST, DAST, SCA, and container scanning.
Cloud security experience in AWS, Azure, or GCP environments.
Strong understanding of OWASP Top 10 (Web & API), CWE, and secure coding practices.
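The artifact-validation and build-integrity requirements above can be sketched with a minimal digest check: a release gate recomputes an artifact's SHA‑256 digest and compares it, in constant time, against the digest recorded in provenance metadata. This is an illustrative simplification; real pipelines would verify cryptographic signatures and attestations (e.g., with Sigstore/Cosign) rather than bare digests.

```python
import hashlib
import hmac

def artifact_digest(data: bytes) -> str:
    """SHA-256 hex digest of a built artifact's bytes."""
    return hashlib.sha256(data).hexdigest()

def verify_artifact(data: bytes, expected: str) -> bool:
    """Release gate: pass only if the artifact matches the recorded digest.

    hmac.compare_digest avoids timing side channels in the comparison.
    """
    return hmac.compare_digest(artifact_digest(data), expected.lower())
```

In practice the `expected` value would come from a signed provenance record produced at build time, so any tampering between build and release fails the gate.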