YipitData, the leading market research and analytics firm for the disruptive economy, is seeking an AI Security Engineer to lead the implementation and monitoring of security controls for AI systems. This role bridges multiple teams to ensure AI systems are functional, trustworthy, and compliant with governance requirements.
Responsibilities:
- Own AI behavior monitoring: Define what trustworthy and untrustworthy AI behavior looks like, and ensure it is measurable in production
- Own AI observability standards: Establish telemetry, tracing, logging, and alerting requirements for AI systems and agentic workflows
- Own control validation for agentic systems: Verify that guardrails, policy checks, access boundaries, and execution constraints are functioning as intended
- Own AI security event analysis: Detect, investigate, and document suspicious, unsafe, or non-compliant AI behaviors and coordinate response
- Own implementation support for governance frameworks: Translate governance principles into technical and operational requirements that product and platform teams can adopt
- Own AI trust metrics and reporting: Define KPIs, KRIs, and dashboards that show leadership whether AI systems are operating within approved trust and security boundaries
- Own continuous improvement of AI controls: Use incidents, testing, behavioral findings, and stakeholder feedback to strengthen control design and reduce residual risk over time
Requirements:
- 5+ years of experience in one or more of the following: security engineering, detection engineering, observability engineering, site reliability engineering, application security, ML platform engineering, or AI governance implementation
- Experience designing monitoring, logging, telemetry, or detection strategies for distributed systems, cloud services, or data-intensive applications
- Familiarity with AI/ML system architecture, including large language models, retrieval-augmented generation, inference pipelines, model APIs, and agentic workflows
- Experience translating governance, risk, or policy requirements into operational controls and measurable technical requirements
- Strong understanding of security concepts such as identity and access management, least privilege, data protection, abuse prevention, auditability, and incident response
- Experience investigating system behavior, identifying anomalies, and working cross-functionally to drive remediation
- Industry certifications such as CISSP, CCSP, or GIAC Machine Learning Engineer (GMLE), or equivalent experience
- Strong written communication skills, including ability to write standards, control definitions, runbooks, and leadership-facing summaries
- Experience with AI observability tooling, tracing frameworks, or telemetry pipelines for LLM or agent-based systems
- Experience implementing controls for AI safety, AI red teaming, prompt security, model misuse detection, or secure tool execution
- Familiarity with Microsoft security, compliance, and AI governance ecosystems
- Familiarity with trust and safety concepts for generative AI and autonomous systems
- Experience supporting internal governance, risk, privacy, or compliance review processes for AI-enabled products
- Experience building dashboards, alerts, and behavioral analytics for security or operational monitoring
- Experience working in highly regulated or audit-sensitive environments