InvoiceCloud is a fast-growing fintech leader recognized with multiple awards for its innovative approach in the finance and FinTech sectors. They are seeking a highly skilled AI Security Engineer to support their Cybersecurity, Engineering, and Data Science organizations by ensuring the security and compliance of AI systems while driving the company's AI-first strategy.
Responsibilities:
- Leads AI Security Architecture & Secure Design initiatives by designing and implementing lifecycle security controls across data ingestion, training, evaluation, deployment, and monitoring environments to measurably reduce AI-specific risk while maintaining product velocity
- Conducts structured Threat Modeling & Risk Assessment exercises for generative AI, RAG, and agent-based systems, evaluating risks such as prompt injection, data poisoning, model extraction, model inversion, abuse/misuse, and data leakage, and mapping findings to OWASP Top 10 for LLM Applications, MITRE ATLAS, and NIST AI RMF to drive remediation through engineering teams
- Defines and operationalizes Monitoring, Detection & Incident Response capabilities for AI systems by implementing prompt and output telemetry, tool-call logging, anomaly detection, and AI-specific incident response playbooks integrated into SIEM/SOC workflows
- Delivers measurable outcomes aligned to 30-, 150-, and 210-day milestones, including secure reference architectures, hardened AI environments, integrated security controls, and executive-ready reporting on AI risk reduction and posture maturity
- Establishes and formalizes AI Governance, Privacy & Third-Party Risk requirements by defining security expectations for AI use cases, third-party models, vendor integrations, and sensitive data usage, embedding controls into SDLC, procurement, and engineering standards
- Drives Cross-Functional Collaboration & Enablement by partnering with Engineering, Data Science, DevSecOps, Product, Legal/Privacy, and SOC teams to align on risk appetite, escalation paths, and secure design guardrails while raising AI security maturity across the organization
- Inventories current and planned AI/ML initiatives, documents system architectures and sensitive-data touchpoints, and implements a structured AI security intake and risk-rating process that ensures accountability and transparency
- Develops and communicates forward-looking 6- and 12-month AI security maturation plans that align technical priorities with business goals and clearly articulate risk trends, metrics, and investment needs to Security leadership and the CISO
- Integrates Secure MLOps / MLSecOps controls into AI delivery pipelines, including secure model registries, artifact signing and provenance validation, dependency scanning, secrets management, CI/CD guardrails, and hardened training and inference environments across AWS and Azure
- Builds and scales AI Security Testing & Red Teaming workflows by creating repeatable adversarial evaluation plans for jailbreaks, model evasion, prompt injection, and data exfiltration scenarios, ensuring security controls remain effective over time
- Develops automated regression test harnesses to continuously validate AI security protections as models, prompts, and dependencies evolve, reducing manual effort and improving coverage
- Establishes a sustainable AI security operating rhythm that includes intake reviews, threat modeling checkpoints, remediation tracking, and structured monitoring ownership to bring consistency and order to AI risk management
- Advances AI Security Testing & Red Teaming capabilities through adversarial experimentation and multi-dimensional analysis, proactively identifying emerging AI threat patterns before production impact
- Leverages AI and automation to strengthen testing coverage, automate regression validation, enhance anomaly detection logic, and improve the scalability of AI security monitoring and response
- Continuously evaluates emerging AI security research, tooling advancements, and regulatory developments, translating insights into adaptive defensive controls that support InvoiceCloud’s AI-first strategy while enabling responsible innovation
Requirements:
- Bachelor's degree in Computer Science, Cybersecurity, Engineering, Data Science, or related field (or equivalent practical experience)
- 5+ years of experience in security engineering, application/product security, cloud security, or DevSecOps
- 2+ years of experience building or securing AI/ML systems (including LLM-based applications) in production environments
- Strong understanding of AI/ML threats and defenses, including prompt injection, data poisoning, model extraction, model inversion, adversarial inputs, data leakage, and abuse/misuse scenarios
- Experience integrating security into CI/CD and MLOps pipelines
- Proficiency with cloud platforms (AWS and Azure), container security, IAM, network segmentation, key management, and secrets management
- Familiarity with industry guidance such as OWASP GenAI/Top 10 for LLM Applications, MITRE ATLAS, and/or NIST AI RMF
- Relevant certifications such as CISSP, CSSLP, CCSP, Azure Security certifications, or GIAC certifications