Horizon3.ai is a fast-growing, remote cybersecurity company focused on helping organizations proactively identify and mitigate exploitable attack vectors. The company is seeking a Staff Attack Engineer specializing in AI/LLM security to develop automated attacks for its NodeZero platform, expanding NodeZero's offensive capabilities and helping customers secure their AI systems.
Responsibilities:
- Break AI and agentic systems and translate that research into automated, repeatable attack modules for NodeZero
- Design and execute prompt injection and defense evasion attacks, focusing on generalized, reusable patterns
- Conduct tool-use exploitation, abusing LLM agents’ access to code, file systems, APIs, and databases for attacker-realistic outcomes (e.g., context poisoning, RCE, data exfiltration, privilege escalation)
- Target AI infrastructure (model serving, training pipelines, vector databases, GPU/MLOps tooling) with an understanding of real-world enterprise deployments and misconfigurations
- Research and apply model and supply chain attacks (poisoning, training data extraction, adversarial inputs, deployment pipeline abuse)
- Perform threat modeling for agentic systems, mapping trust boundaries and attack surfaces and turning them into concrete attack paths
- Apply a strong productization mindset, turning manual techniques into safe, reliable, and scalable automated tooling
- Build and extend LLM-powered applications (prompting, structured output, agentic workflows)
- Design with production concerns in mind: cost, safety and hallucination guardrails, reliability, and observability
- Design and extend microservices that orchestrate LLM tasks and integrate with NodeZero and related offensive workflows
Requirements:
- Expert-level Python and software engineering skills
- Solid penetration testing fundamentals and understanding of common attack chains
- Familiarity with AI/LLM security frameworks (e.g., OWASP Top 10 for LLMs, MITRE ATLAS)
- Experience in a security product or offensive security team, ideally with shipped offensive capabilities or tooling
- Proven ability to break AI/LLM and agentic systems
- Clear understanding of trust boundaries around AI tools, data sources, and permissions, and how to systematically test and exploit them
- Expert-level ownership – drives high-complexity, high-risk programs, setting strategy rather than just executing
- Self-motivated – identifies problems and builds solutions proactively
- Industry obsessed – tracks the fast-moving AI security landscape and can speak to recent developments, new attacks, and where the field is heading
- Experience with AWS Bedrock and AWS Agent Core
- Experience with other cloud AI services (e.g., Azure OpenAI, GCP Vertex AI)
- Contributions to AI security research (blog posts, conference talks, CVEs, open-source tools)
- Familiarity with graph databases (e.g., Neo4j)
- Background in traditional exploit development or vulnerability research
- CTF experience, particularly in AI/ML-focused challenge categories