Braintrust is the AI observability platform, providing tools for builders to understand and improve AI behavior in production. The Application Security Engineer role involves reviewing code, building threat models, and leading AI-specific security work to ensure secure design across the platform.
Responsibilities:
- Drive secure design across the platform: lead threat models for new features, review architecture proposals, and partner with product and backend engineers to ship features that are secure by default
- Review code across our TypeScript, Python, and Go services, our open source tracing libraries, and our model proxy — and find the bugs others miss
- Build the paved road: authn/authz primitives, RBAC and tenancy isolation patterns, secret handling, safe data pipelines, and sandboxed code execution for user-supplied JavaScript and Python snippets
- Own our SAST, DAST, SCA, and secret-scanning tooling end-to-end, keeping signal-to-noise high enough that engineers actually fix what you ship
- Run our vulnerability management program and triage external bug bounty reports; close the loop with durable fixes, not point patches
- Lead AI-specific security work: prompt injection defenses, model proxy abuse detection, agent and tool-use sandboxing, data-exfiltration controls in multimodal pipelines, and security for the eval workflows our customers run
- Partner with our open source maintainers on the security of libraries that get embedded inside customer applications
- Use agentic coding workflows to scale yourself: automated code review, exploit prototyping, control validation, and IR triage
Requirements:
- 5+ years in application security, product security, or backend engineering with a security focus — you've shipped real code and reviewed a lot of it
- Strong code reading and writing skills in at least two of TypeScript/Node.js, Python, Go, or Rust
- Deep knowledge of common web and API vulnerability classes and the architectural patterns that prevent them — not just OWASP Top 10 trivia
- Track record of building secure-by-default libraries, frameworks, or services that other engineers actually adopt
- Hands-on experience with authn/authz design, multi-tenant data isolation, and secrets/key management at scale
- Comfortable with the realities of a high-availability data platform: real-time pipelines, ingestion at scale, semi-structured data, Postgres, Redis, AWS
- A clear point of view on AI/LLM security — prompt injection, agent abuse, tool-use sandboxing, model proxy threats — and ideally hands-on experience defending against them
- Daily user of agentic coding tools and excited to push the frontier of how AppSec gets done with them
- Clear communicator who documents decisions, writes tickets engineers want to pick up, and lifts the team's security awareness without becoming a bottleneck
- prior experience with LLM red-teaming, agent sandbox research, or shipping security-focused open source libraries