Bright Vision Technologies is a forward-thinking software development company dedicated to building innovative solutions that help businesses automate and optimize their operations. The company is seeking an AI Security Engineer to lead the design and implementation of security controls, threat models, and incident response capabilities tailored specifically to AI and machine learning systems.
Responsibilities:
- Define and implement security controls specifically targeting LLM and AI-powered application risks
- Build threat models for AI systems, including prompt injection, jailbreaks, data exfiltration, and abuse patterns
- Design and deploy guardrails, content filters, and policy enforcement layers around model endpoints
- Implement runtime detection and response capabilities for adversarial prompts and abusive behavior
- Secure training and fine-tuning pipelines, including data provenance, integrity, and access controls
- Design controls for sensitive data handling, retention, and redaction in LLM workflows
- Lead red-team exercises against AI systems and drive remediation of identified weaknesses
- Evaluate and harden third-party AI services and open-source AI components used internally
- Implement identity, authorization, and tenant-isolation patterns for multi-tenant AI services
- Drive supply chain security for ML artifacts including weights, datasets, and inference dependencies
- Collaborate with privacy, legal, and compliance teams to ensure AI systems meet regulatory obligations
- Develop monitoring, logging, and detection strategies tailored to AI workloads
- Lead incident response for AI-specific security events and drive durable improvements
- Stay current with adversarial ML, LLM security research, and emerging regulatory developments
Requirements:
- Bachelor's or Master's degree in Computer Science, Cybersecurity, or a related discipline
- Six or more years of security engineering experience, including significant work on AI or ML systems
- Strong understanding of LLM internals, modern AI architectures, and common failure modes
- Hands-on experience designing security controls for AI-powered applications
- Deep knowledge of application security, identity, and cryptography fundamentals
- Experience with threat modeling and security architecture review processes
- Familiarity with adversarial ML, prompt injection, and model abuse research
- Proficiency in Python and at least one systems language
- Strong understanding of cloud security and modern infrastructure controls
- Excellent written and verbal communication skills
Preferred Qualifications:
- Publications, talks, or CTF participation related to AI security
- Experience with red-teaming LLM-based products
- Familiarity with privacy-preserving ML techniques such as differential privacy
- Exposure to regulated industries with strict data handling requirements
- Open-source contributions to AI security tooling