Lumen Technologies is the trusted network for AI, transforming how businesses connect, secure, and scale in an AI-driven world. The Principal AI Security Engineer is responsible for designing and advancing Lumen’s enterprise AI security program, ensuring secure deployment of AI technologies while managing risks and vulnerabilities unique to AI/ML environments.
Responsibilities:
- Lead enterprise-wide initiatives to continuously identify, assess, and remediate vulnerabilities across AI/ML systems, including models, data pipelines, and runtime environments
- Oversee the development and execution of vulnerability scanning, penetration testing, and red-teaming exercises specific to AI technologies
- Architect and enforce comprehensive AI security governance structures that align with Responsible AI guidelines, regulatory mandates, and organizational risk appetite
- Develop policies, standards, and approval processes that ensure full lifecycle oversight of AI/ML systems, supporting auditability, transparency, and accountability in all AI operations
- Direct identification, evaluation, and mitigation of risks inherent to AI/ML deployments
- Lead risk assessments, threat modeling, and scenario analysis to anticipate potential weaknesses and prioritize mitigation strategies
- Spearhead proactive threat intelligence, detection, and response initiatives targeting AI/ML environments
- Develop and operationalize advanced monitoring and analytics to uncover emerging threats, adversarial attacks, and anomalous behaviors unique to AI workloads
- Engineer, implement, and maintain layered security controls, reference architectures, and control baselines that address vulnerabilities, enforce governance, and mitigate risks throughout the AI/ML stack
- Integrate automated discovery, authoritative inventory, and lineage tracking solutions to maintain continuous visibility, enforce policy across AI development and production environments, and enable rapid identification and remediation of security gaps
- Align all AI security capabilities with comprehensive governance frameworks and enterprise security policies, supporting regulatory compliance, audit readiness, and responsible innovation
- Design and build internal tools, automation, and processes that enhance vulnerability detection, threat intelligence, and risk assessment specific to AI/ML environments
- Drive the adoption of emerging technologies, methodologies, and frameworks that strengthen Lumen’s ability to anticipate, detect, and respond to AI-specific security challenges
Requirements:
- Deep understanding of AI/ML systems, including model training, inference pipelines, orchestration frameworks, and agent-based systems
- Expertise in AI-specific threat classes, including prompt injection, data leakage, data poisoning, model extraction, adversarial inputs, and jailbreak techniques
- Experience securing generative AI, multimodal models, and agentic AI systems in enterprise environments
- Proven experience implementing enterprise-scale security solutions spanning cloud, containers, CI/CD, and runtime environments
- Strong grounding in vulnerability management, remediation workflows, and risk-based prioritization, including AI-specific vulnerabilities
- Ability to translate security architecture and governance requirements into enforceable technical controls
- Experience in application and automation development for security tooling and integration
- 15+ years in cybersecurity, with significant experience in advanced threat domains
- Demonstrated leadership as a principal-level engineer or enterprise architect, influencing standards and strategy without direct authority
- Strong understanding of Responsible AI, data classification, lineage, and compliance considerations
- Ability to communicate complex AI security concepts to engineering teams, executives, and governance bodies
- Experience operating in large, federated enterprise environments with complex cloud and DevOps ecosystems