EDB is a data and AI platform provider that enables organizations to harness the full power of Postgres. As a Staff Security Engineer, you will design and implement security architectures, lead application security initiatives, and collaborate with engineering teams to enhance EDB's security posture.
Responsibilities:
- Lead cross-functional application security initiatives to identify, prioritize, and mitigate security risks across EDB's products
- Write and review code to build security automation and tooling that serves the full InfoSec organization, accelerating the team's ability to detect, respond to, and remediate threats
- Build and orchestrate security agents, deploying AI-driven security tools that use LLMs and orchestration frameworks (e.g., LangChain) to automate threat modeling, alert triage, and code analysis
- Partner with internal teams to implement security guardrails for internal AI applications, focusing on prompt injection mitigation, data leakage prevention, and secure architectures
- Integrate AI tools into the SDLC to perform automated architectural risk assessments and security reviews, and to identify vulnerabilities in generated code or toolsets
- Design and integrate complex security architectures across cloud and on-premises environments, strengthening EDB's overall defense posture against advanced threats
- Lead vulnerability disclosure investigations, coordinating with engineering teams to assess impact, validate findings, and drive timely remediation
- Embed security into the software development lifecycle through secure design reviews, code reviews, threat modeling, and ongoing partnership with engineering and product teams. Build trust with development teams by meeting them where they are, respecting their workflows, and delivering clear guidance throughout implementation
- Deliver security solutions as minimum valuable products, starting with the smallest solution that provides the needed value and iterating over time as capacity allows
- Drive continuous improvement of security tooling, detection capabilities, and monitoring infrastructure
Requirements:
- A developer-centric background with demonstrated ability to write and review production-quality code in Python, Go, or a comparable language
- Hands-on LLM engineering: proven experience working with LLM APIs (e.g., Anthropic Claude, OpenAI) and 'AI-as-a-Service' kits to build functional internal tools or security automations
- Deep understanding of the OWASP Top 10 for LLM Applications, including risks like prompt injection, insecure output handling, and training data poisoning
- Ability to craft complex, multi-shot prompts and system instructions to ensure AI security agents provide high-fidelity, low-noise results
- Proven experience leading cross-functional application security initiatives in complex, distributed environments
- Demonstrated experience leading vulnerability disclosure investigations, including impact assessment, coordination with engineering teams, and driving remediation. (You don't need to be able to write novel exploits — you need to assess risk and drive fixes.)
- Proven ability to build trust with development teams: reviewing their code, engaging in their design discussions, and partnering as a peer rather than a gatekeeper
- Strong communication skills with the ability to influence cross-functional stakeholders, translate technical security concerns into business risks, and negotiate priorities with partner teams to get security initiatives on shared roadmaps
- An empathetic, collaborative approach to working with partner teams, respecting their processes and assuming the best while still driving accountability for security outcomes
- Demonstrated ability to balance long-term security architecture initiatives with day-to-day operational security needs, delivering incremental value rather than waiting for large, all-at-once solutions
- An AI-first approach to problem solving and security, leveraging AI tools and techniques to accelerate delivery, automate security workflows, and enhance decision-making
- Interest in growing into a broader InfoSec role over time, taking on expanded scope and influence across the organization
- Familiarity with AI Red Teaming or using LLMs to simulate adversarial attack paths
- Experience with database security, particularly PostgreSQL or other relational database systems
- Knowledge of the MITRE ATT&CK Framework, attack chains, and attack path mapping
- Experience developing and delivering security awareness training programs at an organizational level
- Experience writing and reviewing C code
- Contributions to open-source AI security projects or frameworks
- Expertise in one or more compliance frameworks: SOC 2, PCI, HIPAA, FedRAMP (800-53), ISO 27001