McGraw Hill is dedicated to delivering digital learning experiences that transform education for learners and educators. The Cybersecurity Engineer – AI & DLP is responsible for designing and implementing data protection and governance controls across enterprise AI platforms, focusing on preventing data leaks and ensuring compliance with security and privacy standards.
Responsibilities:
- Define and implement AI security controls, such as prompt filtering, response inspection, redaction, and usage monitoring, to ensure enterprise AI tools operate within approved data protection and compliance boundaries
- Evaluate inputs and outputs of enterprise AI tools (e.g., ChatGPT, Claude, and internal LLM platforms) to identify risks related to sensitive data exposure, prompt injection, and intellectual property leakage
- Design, implement, and operate technical guardrails and Data Loss Prevention (DLP) controls, including prompt inspection, output filtering, and DLP policies, to prevent the exposure of sensitive data across enterprise AI platforms and generative AI tools
- Partner with engineering, AI/data science, and Digital Workspace teams to integrate security controls into AI platforms, including prompt monitoring, data classification, and access controls
- Evaluate emerging AI tools, models, and AI-assisted development platforms to identify cybersecurity risks and recommend appropriate security requirements and mitigations
- Implement logging, monitoring, and alerting capabilities to provide visibility into how enterprise data is accessed, processed, and shared through AI systems
- Develop and enforce policies and technical controls that prevent the use of sensitive data (e.g., PII, credentials, proprietary content) within AI prompts, training datasets, or integrations
- Design and implement a Data Loss Prevention (DLP) strategy across all MH infrastructure (Microsoft Purview, Zscaler, cloud environments)
- Operationalize alert and triage standard operating procedures covering sensitive emails, file uploads, and other data-loss channels
- Support the design of secure architecture for enterprise AI platforms, including controls for data handling, model access, API usage, and third-party integrations
- Contribute to security awareness and guidance for developers and employees on safe and responsible use of generative AI tools
Requirements:
- 15+ years of applicable experience
- Bachelor's degree in Computer Science, Engineering, or a related field
- Strong communication skills and comfort working directly with business stakeholders, vendors, and leadership
- Ability to present risks and recommendations to leadership
- Ability to translate complex security concepts into business value
- Understanding of the Model Context Protocol (MCP), Retrieval-Augmented Generation (RAG), and API integrations
- Strong knowledge of DLP technical controls, concepts, and end user computing behaviors
- Experience administering the Microsoft tool suite, particularly M365 Copilot, GitHub Copilot, and Microsoft Purview
- In-depth knowledge of agentic AI usage and guardrails from both end-user and development perspectives
- Knowledge of client/server compute infrastructure and engineering