Lead the design and operationalization of Babel Street’s AI trust framework across safety, privacy, security, bias/fairness, and transparency.
Create and institutionalize the Babel Street AI Principles, translating them into policies, engineering standards, delivery gates, and reporting artifacts.
Partner with Product, Engineering, Security (CISO), Legal/Privacy, and Customer Success teams to ensure governance requirements are understood, adopted, and embedded into the AI lifecycle.
Translate emerging AI policy and customer requirements into concrete engineering controls and reusable collateral that accelerates RFI/RFP responses and supports customer due diligence.
Requirements
8+ years of experience across governance, risk, security, privacy, compliance, or systems architecture, including 3+ years supporting AI/ML-enabled systems in production.
Demonstrated ability to translate governance requirements into implementable controls and engineering practices (not policy-only work).
Strong understanding of AI governance frameworks and standards (e.g., NIST AI RMF, ISO/IEC 42001, ISO/IEC 23894) and experience operationalizing them as organizational processes.
Experience producing customer-facing and audit-ready content: RFI/RFP responses, control narratives, assurance artifacts, and executive-ready summaries.
Strong stakeholder management and influence skills; able to drive decisions, build consensus, and maintain accountability across multiple teams.
Excellent written and verbal communication skills; able to craft clear, defensible positions for customers, executives, and internal teams.
Bachelor’s degree in Computer Science, Engineering, Information Assurance, Data Science, Public Policy (with technical depth), or a related field, or equivalent experience, required.