Crowe is a leading public accounting, consulting, and technology firm that focuses on enhancing services through innovative AI solutions. The AI Security Engineer I (Senior Staff) role involves securing enterprise AI and machine learning systems, conducting advanced security assessments, and collaborating with various teams to protect sensitive data from emerging threats.
Responsibilities:
- Architecting secure deployment and operating models for AI, ML, and generative AI systems across cloud and hybrid environments
- Conducting advanced AI security testing, including adversarial ML attacks, prompt injection simulations, and RAG manipulation assessments
- Identifying and mitigating vulnerabilities in model-serving infrastructure, feature stores, embedding pipelines, and vector databases
- Designing guardrails, safety filters, access controls, and secure interaction patterns for LLM- and RAG-based applications
- Developing automated tooling to detect misconfigurations, insecure endpoints, and data exposure risks within AI pipelines
- Collaborating with cloud and DevOps teams to secure Kubernetes clusters, GPU workloads, and infrastructure-as-code deployments
- Analyzing logs, telemetry, and model outputs to detect anomalies, abuse patterns, model degradation, or malicious activity
- Implementing encryption, secrets management, IAM policies, and network segmentation for AI workloads
- Leading secure design and architecture reviews for AI features, APIs, and platform components
- Documenting threat models, attack surfaces, risk assessments, mitigations, and compliance artifacts
- Participating in AI-specific incident response, investigation, and post-incident analysis
- Evaluating emerging AI security technologies, including model fingerprinting, inference protection, and secure execution environments
- Supporting enterprise adoption of responsible AI, data protection, and regulatory compliance standards
- Mentoring junior engineers, ML engineers, and security practitioners on AI security best practices
- Contributing to cloud security posture management capabilities for AI-enabled platforms
Requirements:
- 4+ years of experience in cybersecurity, cloud security, ML engineering, or DevSecOps roles
- Demonstrated experience securing AI/ML or generative AI systems in production environments
- Strong understanding of ML pipelines, model architectures, and AI system components
- Deep knowledge of adversarial ML attack vectors and mitigation techniques
- Proficiency in Python, security testing tools, and cloud security frameworks
- Ability to assess risk across distributed services, storage systems, inference APIs, and data pipelines
- Strong communication skills and sound technical judgment in security decision-making
- Hands-on experience with Microsoft Azure and M365 security environments
- Willingness to travel occasionally for cross-functional planning and collaboration
- Bachelor's degree in Cybersecurity, Computer Science, Engineering, or a related technical field, or equivalent experience
- Master's degree or advanced training in cybersecurity, AI, or related discipline
- Security and cloud certifications such as SC-100, SC-900, SC-200, SC-300, AZ-500, AI-102, or equivalent AWS certifications
- CISSP, CKS, or CompTIA Cloud certifications
- Advanced experience securing AI platforms on Azure, including Kubernetes security (RBAC, network policies) and multi-tenant GPU workloads
- Experience securing container pipelines using image scanning, signing, and policy enforcement
- Expertise with secrets management solutions (e.g., Azure Key Vault, HashiCorp Vault)
- Experience implementing zero-trust architecture and securing CI/CD pipelines for AI systems
- Deep knowledge of generative AI and RAG security, including prevention of prompt injections, jailbreaks, context poisoning, and embedding leakage
- Experience designing safe-output rendering patterns, guardrails, and red-teaming processes for generative systems
- Familiarity with emerging generative AI defense techniques such as model watermarking, inference integrity checks, and output validation frameworks