Design, build, and operate AI-assisted vulnerability management workflows across Eclipse Foundation open source projects
Build pipelines and integrate AI-assisted analysis into developer and CI/CD workflows
Evaluate findings critically and reduce false positives
Collaborate with project maintainers to land real fixes
Deliver measurable improvements in how the Foundation discovers, prioritizes, and resolves security issues
Help define safe and appropriate use of AI tooling
Produce internal playbooks, technical write-ups, and metrics dashboards
Participate in vulnerability disclosure processes, CVE management, and security advisories as needed
Requirements
Degree in software engineering, computer science, cybersecurity, or a related field is welcome; equivalent practical experience is highly valued
Strong application security background
Familiarity with common vulnerability classes such as OWASP Top 10 and CWE
Knowledge of secure coding practices and practical exploitability analysis
Hands-on experience conducting security code reviews, audits, or assessments using SAST, DAST, SCA, dependency scanning, or other code analysis tools
Ability to build and integrate developer-facing tooling using languages such as Python, Java, TypeScript, or similar
Practical experience applying LLMs or AI-assisted tools to code analysis, vulnerability research, developer productivity, or security automation
Familiarity with open source development workflows, including Git, GitHub or GitLab, pull requests, issue tracking, and CI/CD
Strong written communication skills, including the ability to write actionable security findings, advisories, issues, and remediation guidance for maintainers with varying security backgrounds