Pinterest is a platform dedicated to inspiring creativity and innovation among its users. Pinterest is seeking a Staff Product Manager for its GenAI Safety team to define and drive product strategies that ensure the safety and trustworthiness of AI-powered systems, collaborating with teams across the company to implement effective safety measures.
Responsibilities:
- Own and drive the product roadmap for GenAI safety across Pinterest's AI-powered surfaces, including assisted search, content recommendations, automated moderation, and generative content creation tools
- Lead proactive identification of risks, failure modes, and adversarial attack vectors across AI systems - designing structured red-teaming exercises and evaluation frameworks before and after product launches
- Partner closely with Trust & Safety policy, legal, and ethics teams to translate nuanced content guidelines (e.g., self-harm, misinformation, body image) into precise, buildable product requirements and model guardrails
- Work with engineering, ML, design, data science, policy, legal, comms, and operations teams to define, align, and ship AI safety solutions across global markets and diverse user populations
- Define and track quantitative safety metrics - including fairness audits, false positive/negative rates, disparate impact analysis, and content harm reduction - to ensure AI systems meet safety standards at scale
- Develop and maintain AI safety incident runbooks and escalation frameworks, and lead rapid triage and remediation when AI systems produce harmful or unexpected outputs
- Stay ahead of the rapidly evolving AI landscape to identify safety implications of new capabilities (e.g., multi-modal generation, synthetic media, agentic AI) and proactively build extensible safety infrastructure to address unknown future applications
- Ensure AI safety approaches account for the needs, norms, and contexts of Pinterest's diverse global user base - avoiding one-size-fits-all solutions and centering equity in safety design
- Champion the safety and psychological wellbeing of both users who encounter harmful content and internal teams (content reviewers, T&S specialists) who work on the front lines of content safety
Requirements:
- 7+ years of product management experience, with meaningful depth in GenAI/ML, trust & safety, content moderation, or responsible AI
- Strong fluency in AI/ML concepts - including generative models, recommendation systems, multi-modal AI, and reinforcement learning from human feedback (RLHF)
- Experience with AI ethics frameworks, responsible AI principles, or relevant regulatory landscapes (e.g., NIST AI RMF, EU AI Act)
- Demonstrated ability to lead cross-functional teams through ambiguous, high-stakes problem spaces with a bias for action
- Ability to engage with current research, map threat models, validate risks, and translate insights into clear product strategies and roadmaps
- Excellent communication skills, including the ability to articulate complex technical and ethical trade-offs to non-technical audiences and senior leadership to facilitate clear decision-making
- Deep empathy for users and a genuine commitment to making the internet safer
- Bachelor's degree in a relevant field such as Computer Science, or equivalent experience