Develop security tools and frameworks for Bring Your Own Model (BYOM) workflows and perform threat modeling for ML pipelines.
Ensure proactive detection of vulnerabilities and compliance with emerging ML security standards.
Build security scanning tools for ML artifacts and deployment workflows.
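One concrete instance of such a scanner is checking serialized model artifacts for unsafe deserialization. The sketch below is a minimal, illustrative example (the function and opcode list are not from this posting): it walks a pickle stream with the standard-library `pickletools` module and flags opcodes that can trigger arbitrary code execution at load time.

```python
import io
import pickle
import pickletools

# Illustrative (not exhaustive) set of pickle opcodes that can invoke
# arbitrary callables when the stream is unpickled.
DANGEROUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def scan_pickle_bytes(data: bytes) -> list[str]:
    """Return descriptions of risky opcodes found in a pickle stream."""
    findings = []
    for opcode, arg, pos in pickletools.genops(io.BytesIO(data)):
        if opcode.name in DANGEROUS_OPCODES:
            findings.append(f"{opcode.name} at byte {pos}")
    return findings

# A pickle of plain data produces no findings; an object whose
# __reduce__ smuggles in a callable does.
benign = pickle.dumps({"weights": [0.1, 0.2]})
print(scan_pickle_bytes(benign))
```

A real artifact scanner would also cover other formats (e.g., checking archive members inside a saved model) and report severity, but the opcode walk above is the core of the pickle case.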
Design secure APIs for model integration on embedded platforms.
Perform threat modeling for ML systems (e.g., data poisoning, evasion, prompt injection).
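The output of such a threat-modeling exercise is often a catalog mapping lifecycle stages to attacks and candidate mitigations. Below is a hypothetical sketch of that structure; the attack names come from the list above, while the stage keys and mitigations are illustrative examples only, not a prescribed methodology.

```python
# Illustrative threat catalog for an ML pipeline, keyed by lifecycle stage.
# Mitigations listed here are examples, not recommendations.
ML_THREATS = {
    "training": {
        "attack": "data poisoning",
        "mitigations": ["dataset provenance checks", "outlier filtering"],
    },
    "inference": {
        "attack": "evasion (adversarial examples)",
        "mitigations": ["input validation", "adversarial training"],
    },
    "llm_interface": {
        "attack": "prompt injection",
        "mitigations": ["input/output filtering", "least-privilege tool access"],
    },
}

def mitigations_for(stage: str) -> list[str]:
    """Look up candidate mitigations for a pipeline stage."""
    return ML_THREATS.get(stage, {}).get("mitigations", [])
```

Keeping the catalog as data rather than prose makes it easy to feed into automated pipeline checks.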
Implement monitoring solutions for model integrity and anomaly detection.
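The simplest integrity check in this space is digest pinning: record a cryptographic hash of the model artifact at deployment time and re-verify it before each load. The sketch below is a minimal example under that assumption (function names are hypothetical), using the standard-library `hashlib`.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, streaming in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model(path: Path, expected_digest: str) -> bool:
    """Return True if the artifact on disk matches the pinned digest."""
    return sha256_of(path) == expected_digest
```

Production monitoring would layer anomaly detection (e.g., on prediction distributions) on top, but digest pinning catches the blunt case of a tampered or swapped artifact.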
Ensure compliance with NIST AI Risk Management Framework and similar standards.
Collaborate with internal teams to integrate security checks into development pipelines.
Requirements
A background in Computer Science, Cybersecurity, or Cryptography with a strong interest in applied ML, or a background in Machine Learning with an interest in cybersecurity.
Strong Python development skills for automation and tooling.
Strong experience with threat modeling methodologies adapted for ML systems.
Knowledge of adversarial ML attacks and defenses.
Familiarity with secure API design and integration.
Understanding of compliance frameworks (NIST AI RMF, ISO/IEC AI security standards).