Cribl is a rapidly growing company focused on providing innovative data solutions for IT and Security. It is seeking a Staff AI Security Engineer to build the AI Systems team, focusing on designing and implementing security frameworks for AI adoption across internal systems and workflows.
Responsibilities:
- Define, threat model, and operationalize the security architecture for Cribl’s internal AI platform, including standards, controls, approval patterns, and secure-by-design guidance for AI use cases before they scale into production
- Partner with Business Operations to maintain visibility into AI tools, licenses, API tokens, MCP servers, and ad hoc workflows in use across the company, and monitor for ungoverned or high-risk patterns that require remediation
- Own the framework for vetting MCP servers, maintaining an approved registry, defining risk tiers, and enforcing secure connection patterns as MCP adoption expands across teams
- Establish secure patterns for secrets management, non-human identities, scoped credentials, OAuth-based access, and token governance to enforce least-privilege access and reduce credential exposure in AI builds
- Design and deploy guardrails for prompt injection defense, deterministic validation, human-in-the-loop approvals, and additional controls for high-risk workflows that combine sensitive data, untrusted content, and external action
- Help establish Cribl as the observability backbone for AI systems, including telemetry pipelines, abuse detection, audit trails, threat hunting, and incident response patterns for AI-specific security events
- Partner with Cribl’s Compliance team to drive documentation and control readiness for AI-related obligations and customer scrutiny, including NIST AI RMF, ISO 42001, EU AI Act readiness, AI acceptable use standards, and customer-facing AI governance materials
- Establish the security controls required for AI-assisted internal development, including secure coding practices, secrets management, SCA/SAST/DAST expectations, and review patterns for AI-generated code and workflows
- Define and track the metrics that matter most for AI security, including shadow AI exposure, control coverage, incident trends, security review turnaround, and reduction of high-risk patterns as the platform scales across the company
Requirements:
- Staff-level security engineering experience: 7+ years in security engineering, application security, cloud security, identity and access management, detection engineering, or related technical security roles, with a track record of building practical controls that scale
- AI security fluency: Strong hands-on experience with modern LLM and agentic systems, including threat models for prompt injection, tool use, model access, RAG, AI coding tools, and API-driven integrations
- Identity, secrets, and governance depth: Proven experience with OAuth, service identities, secrets management, RBAC / ABAC / scoped permissions, auditability, and secure-by-default architecture patterns
- Security architecture judgment: Experience designing risk-tiered controls, approval models, and protective guardrails that balance innovation with real-world compliance and operational needs
- Detection and incident response mindset: Ability to operationalize telemetry, define actionable detections, investigate security signals, and build pragmatic response paths for new threat surfaces
- Compliance and customer trust orientation: Familiarity with frameworks and customer expectations relevant to enterprise AI governance, including NIST AI RMF, ISO 42001, SOC 2, GDPR, SOX, or adjacent control environments
- Cross-functional communication: Strong written and verbal communication skills, with the ability to simplify risk, controls, and tradeoffs for engineers, business stakeholders, and senior leaders alike
- Builder mentality: You are comfortable creating the first version of the registry, the standards, the playbooks, and the guardrails. Ambiguity energizes you
- Outcome orientation: You care about materially reducing risk while enabling useful AI adoption. You understand that security only works if it is practical enough to be used
Preferred Qualifications:
- Experience with AI development tools like Claude Code, AWS Bedrock, or similar enterprise AI platforms
- Experience with MCP, agent skills, API security, gateway technologies, or tool-use architectures for AI agents
- Familiarity with multi-agent workflow design, workflow security patterns, and human-in-the-loop orchestration controls
- Experience with SCA / SAST / DAST, secrets management, SIEM / telemetry pipelines, and secure software delivery controls
- Familiarity with enterprise systems such as Salesforce, NetSuite, Workday, Jira, Confluence, Slack, Google Drive, and Glean, especially where AI workflows introduce differentiated risk
- Experience operating in a high-growth, remote-first B2B SaaS environment
- Comfort partnering closely with Security, IT, GTM Ops, Finance, People, Legal, and Support stakeholders
- Good jokes, or maybe better, bad jokes
- A love for goats