PayNearMe is on a mission to simplify payments through innovative technology. The AI Security Engineer will lead efforts to secure the organization's data and infrastructure against AI-related risks, focusing on protecting sensitive data and defending against external AI-driven threats.
Responsibilities:
- Develop and implement AI governance and security controls to prevent sensitive data exposure through internal AI tooling (e.g., ChatGPT Enterprise, Microsoft Copilot, GitHub Copilot, and other LLM-integrated systems)
- Establish policies, guardrails, and data classification enforcement for prompt inputs and outputs across all AI-enabled workflows
- Monitor and assess AI tool usage across departments to identify policy violations, data leakage risks, or unapproved AI integrations
- Collaborate with Engineering, IT, and Legal to ensure compliance with SOC2, NIST CSF, PCI-DSS, and data privacy regulations (e.g., GDPR, CCPA) in the context of AI data processing
- Develop and maintain a secure AI development environment for internal data science and ML projects
- Support red-teaming and testing of internal AI models to identify vulnerabilities, including prompt injection, data exfiltration, and model inversion risks
- Identify and mitigate AI-driven external threats, including model-based phishing, deepfakes, data poisoning, and automated attack tooling
- Work with the security operations and incident response teams to detect and respond to AI-generated or AI-assisted attacks
- Contribute to the organization’s threat intelligence program with a focus on emerging AI risks
- Evaluate and test AI-specific security solutions, including model monitoring, output filtering, and anomaly detection tools
Requirements:
- 5+ years of experience in information security or application security, with 1–2 years in AI/ML or data security domains
- Strong understanding of data governance, DLP and security controls for SaaS and cloud environments (AWS preferred)
- Familiarity with AI model architectures, data pipelines and risks such as prompt injection, data leakage and model exploitation
- Experience implementing security frameworks (e.g., NIST CSF, ISO 27001, SOC 2, or PCI DSS)
- Strong collaboration skills and ability to communicate complex security concepts to non-technical stakeholders
- Experience with AI/ML tooling security (e.g., OpenAI, Anthropic, Azure OpenAI or Google Vertex AI)
- Knowledge of secure model development lifecycle (SDLC for AI) and MLOps practices
- Demonstrated experience building or integrating AI governance and monitoring systems
- Relevant certifications (e.g., CISSP, CCSK, or AI-specific credentials such as ISACA AAISM or CAISP)