10a Labs is the safety and threat-intelligence layer trusted by frontier AI labs and leading global technology platforms. As a Security Engineer, you will be responsible for securing cutting-edge AI systems, identifying vulnerabilities, and building protections into code and workflows while collaborating with researchers to mitigate risks from adversarial actors.
Responsibilities:
- Conduct threat modeling, vulnerability assessments, and red-team-style testing of AI-related systems
- Design and implement security controls across infrastructure, applications, and data pipelines
- Build, operate, and maintain detection, monitoring, and incident response capabilities, including investigating incidents and driving remediation
- Collaborate with engineers and researchers to embed security best practices into the design and deployment of AI systems
- Develop automation, tooling, and documentation to improve security operations and reduce manual effort
- Stay current on emerging threats — particularly those related to AI/ML, cloud, and large-scale distributed systems
Requirements:
- Degree in Computer Science, Engineering, Cybersecurity, or a related field — or equivalent professional experience
- 3+ years of hands-on experience in security engineering, application security, detection & response, or penetration testing
- Proficiency in relevant programming/scripting languages (e.g., Python, Go, Bash, PowerShell)
- Strong knowledge of secure software development practices, including threat modeling, code review, and DevSecOps principles
- Solid understanding of cloud security concepts (IAM, VPCs, encryption, logging/monitoring) and secure deployment practices
- Familiarity with modern infrastructure, containerization (Docker, Kubernetes), and scalable distributed systems
- Experience communicating complex security topics effectively to both technical and non-technical stakeholders
Preferred qualifications:
- Hands-on experience securing AI/ML systems, data pipelines, or large-scale distributed environments
- Background in offensive security (exploit development, red teaming, adversary simulation)
- Familiarity with AI-specific risks such as prompt injection, model theft, data poisoning, or adversarial ML