Vantage Data Centers powers and connects technology for major hyperscalers and cloud providers. The company is seeking a Cybersecurity AI Risk and Governance Engineer to oversee the security, risk management, and governance of its AI systems, ensuring they operate securely and in alignment with global policies.
Responsibilities:
- Perform technical security testing and reviews of AI‑enabled applications, agents, and workflows, covering architecture, design, and implementation, under established governance and the direction of the Director
- Implement approved security architecture patterns for AI, ML, and LLM systems across cloud, hybrid, on‑prem, and OT‑adjacent environments
- Engineer secure inference paths, APIs, service identities, authentication flows, and segmentation boundaries aligned with least privilege and zero trust principles
- Implement technical safeguards to mitigate prompt injection, unauthorized context expansion, data leakage, hallucination risk, and unsafe output handling
- Configure and maintain controls for limiting, monitoring, logging, and managing AI usage across platforms, models, and agents
- Implement and validate technical controls supporting model explainability, traceability, and output validation where AI impacts operational, workforce, safety, or compliance decisions
- Review and validate LLM usage patterns, including prompt design, retrieval‑augmented generation (RAG), context window constraints, and output handling mechanisms
- Implement controls preventing unauthorized external model training, reuse, or retention of enterprise data by third‑party AI platforms
- Validate encryption, access logging, retention, and deletion controls for data ingested, processed, or generated by AI systems
- Execute AI‑specific threat modeling activities and contribute findings to enterprise and OT cybersecurity risk assessments
- Ensure AI systems produce security telemetry, logs, and audit trails sufficient to detect misuse, drift, policy violations, or anomalous behavior
- Integrate AI security signals into SOC, SIEM, and incident response tooling and workflows
- Support investigation and response to AI‑related incidents, including data exposure, model failure, unsafe outputs, or control breakdowns
- Conduct technical security reviews of vendor‑provided and embedded AI capabilities, assessing model behavior, data handling, and control alignment
- Enforce approved security requirements for AI vendors and prevent activation of AI features without required security validation and governance approval
- Drive alignment with ISO 42001 and related AI governance standards across applicable teams
Requirements:
- Bachelor's degree in Cybersecurity, Computer Science, Data Science, Engineering, or related field, or equivalent experience
- Minimum of 5 years of experience in cybersecurity, security architecture, or risk engineering roles
- Hands‑on experience securing data pipelines, APIs, cloud platforms, and analytics or ML‑enabled systems
- Strong understanding of identity, access management, encryption, logging, and secure system design
- Direct experience securing AI/ML platforms, LLMs, or analytics pipelines
- Experience with cloud security (Azure, AWS, GCP) and SaaS‑based AI platforms
- Familiarity with OT, critical infrastructure, or safety‑critical environments
- Security certifications such as Security+, CISSP, or CISM, or equivalent cloud security certifications