Responsible for leading, developing, and maintaining the technical IT/cyber security capabilities necessary for safeguarding the firm's information systems.
This role is part of Truist's AI Security Engineering function and is responsible for designing, engineering, deploying, and operating security controls for AI, ML, and Generative AI systems across cloud platforms.
It focuses on hands-on engineering: enabling secure AI usage through cloud-native security controls, automation, and infrastructure as code, and operationalizing enterprise AI security standards.
The role works closely with AI platform teams, cloud engineering, governance, and risk partners to ensure AI systems are secure by design, compliant by default, and scalable across the enterprise.

Responsibilities
Implement AI guardrails and safety controls (e.g., prompt injection defenses, content safety filters, policy enforcement, model access controls).
Support secure AI platform onboarding for internal teams, ensuring alignment with Truist AI Security Standards and Review Processes.
Perform technical security assessments of AI systems and cloud-hosted AI services.
Implement and integrate cloud security tooling (e.g., Wiz) to provide visibility and control over AI assets.
Partner with AI platform teams, application engineers, cloud security, and governance stakeholders to embed security into AI delivery.
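As a minimal illustrative sketch only (the function and deny-list patterns below are assumptions for illustration, not Truist tooling; production guardrails such as managed content-safety services use trained classifiers rather than regexes), a deny-list prompt screen of the kind that might back an AI guardrail could look like:

```python
import re

# Hypothetical deny-list patterns for common prompt-injection phrasings.
# A real control would combine classifiers, policy engines, and logging.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"reveal (your )?system prompt", re.I),
    re.compile(r"you are now (in )?developer mode", re.I),
]

def screen_prompt(user_input: str) -> dict:
    """Return an allow/block decision plus the matched rule, if any."""
    for pattern in INJECTION_PATTERNS:
        if pattern.search(user_input):
            return {"allowed": False, "rule": pattern.pattern}
    return {"allowed": True, "rule": None}
```

Such a filter would sit in front of the model invocation, with blocked requests routed to audit logging and policy review.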
Requirements
Bachelor’s degree and 10 years of experience in systems engineering, or an equivalent combination of education and work experience
Strong functional and technical knowledge of information/cyber security capabilities with deep expertise in one or more of the following areas: Encryption, Data Security, Application Security, Endpoint Security, Identity and Access Management, Windows/Unix/Linux Systems Security, Mainframe Security, Perimeter Security, Network Security, Mobility Security, Cloud Security, Cyber Security, Cryptography, or Authentication Systems
Strong understanding of service lifecycle management, strategic planning, and the cyber security landscape
Strong hands-on experience with Azure and/or AWS
Infrastructure as Code experience with Terraform and CloudFormation
Experience building and managing CI/CD pipelines (GitLab)
Experience implementing or operating cloud security tooling (e.g., Microsoft Purview, Sentinel, Wiz, or equivalent)
Experience securing AI/ML or Generative AI systems in production environments
Familiarity with AI-specific security controls such as prompt injection mitigation, content safety/moderation controls, model access and usage restrictions, and secure data handling for AI pipelines
Experience working in regulated environments with strong risk and governance requirements