AI · ML · Deep Learning · Natural Language Processing · Generative AI · MLOps · Risk Management
Role Overview
Design, implement, and continuously improve enterprise-wide AI governance frameworks that align with regulatory requirements, industry best practices, and organizational values
Establish and chair AI governance committees, including AI Ethics Boards and Model Risk Management forums, to oversee AI development, deployment, and monitoring
Develop responsible AI principles, policies, and standards addressing fairness, transparency, explainability, accountability, privacy, and safety
Create and maintain AI governance documentation including charters, playbooks, standard operating procedures, and decision frameworks
Ensure AI systems comply with financial services regulations including FCRA, ECOA, fair lending laws, model risk management guidance (SR 11-7), and emerging AI-specific regulations
Develop and implement AI risk assessment frameworks to identify, measure, monitor, and mitigate risks including algorithmic bias, model drift, data quality issues, and adversarial threats
Collaborate with Legal, Compliance, and Risk Management teams to interpret regulatory guidance and translate requirements into actionable technical controls
Lead responses to regulatory examinations and audits of AI systems, preparing documentation and fielding examiner inquiries
Establish model validation standards and processes for AI/ML models across the organization, including testing for bias, fairness, and robustness (a minimal fairness-testing sketch follows this list)
Implement AI model inventory and lifecycle management systems to track models from development through deployment and retirement (an illustrative inventory-record sketch follows this list)
Define and implement explainability and interpretability requirements for AI systems, ensuring stakeholders can understand model decisions
Partner with technology teams to embed governance controls in AI development platforms, MLOps pipelines, and production environments
Build partnerships across business units, technology, risk management, compliance, legal, and human resources to embed responsible AI practices
Develop and deliver training programs on responsible AI, AI ethics, and governance requirements for technical and non-technical audiences
Establish metrics and KPIs to measure the effectiveness of AI governance programs and responsible AI practices
Develop ongoing monitoring capabilities for deployed AI systems to detect performance degradation, bias drift, and compliance issues (a drift-monitoring sketch follows this list)
Stay current with emerging AI technologies, governance frameworks, regulatory developments, and industry best practices, adapting organizational approaches accordingly
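The bias and fairness testing called for in the validation bullet above can be grounded in checks as simple as the four-fifths (80%) rule used in fair lending analysis. Below is a minimal sketch; the function name, sample data, and threshold are assumptions for illustration, not an actual validation standard.

```python
# Illustrative only: function name, data, and the 0.8 threshold are
# assumptions for the example, not an actual validation standard.

def disparate_impact_ratio(outcomes, groups, protected, reference):
    """Ratio of favorable-outcome rates: protected group vs. reference group."""
    def rate(group):
        decisions = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(decisions) / len(decisions)
    return rate(protected) / rate(reference)

# Hypothetical decisions: 1 = approved, 0 = denied, one group label per applicant.
outcomes = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact_ratio(outcomes, groups, protected="A", reference="B")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths rule of thumb; real policy thresholds vary
    print("Potential adverse impact: flag for fairness review")
```

In practice a check like this would typically run against holdout decisions for each protected class relevant under ECOA and fair lending guidance, with results logged to the model's validation record.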
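For the model inventory and lifecycle bullet, the core data structure is often no more than a registry record with an enforced set of lifecycle stages. The sketch below is a minimal illustration; the field names, stages, and example entry are assumptions, not a mandated schema.

```python
# Illustrative only: field names, lifecycle stages, and the example entry
# are assumptions, not a mandated inventory schema.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import Optional

class Stage(Enum):
    DEVELOPMENT = "development"
    VALIDATION = "validation"
    PRODUCTION = "production"
    RETIRED = "retired"

@dataclass
class ModelRecord:
    model_id: str
    owner: str
    use_case: str            # e.g., credit underwriting, fraud scoring
    risk_tier: int           # 1 = highest inherent risk
    stage: Stage = Stage.DEVELOPMENT
    last_validated: Optional[date] = None
    history: list = field(default_factory=list)

    def advance(self, to: Stage) -> None:
        """Record a lifecycle transition; a real system would also enforce
        allowed transitions and required approvals here."""
        self.history.append(f"{date.today()}: {self.stage.value} -> {to.value}")
        self.stage = to

# Hypothetical entry for a credit underwriting model.
record = ModelRecord("MDL-0042", "Consumer Credit", "credit underwriting", risk_tier=1)
record.advance(Stage.VALIDATION)
print(record.stage.value, record.history)
```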
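The ongoing-monitoring bullet is commonly operationalized with distribution-shift statistics such as the Population Stability Index (PSI), comparing live scores against the development baseline. The following sketch assumes decile bins, synthetic data, and rule-of-thumb thresholds; none of these reflect a prescribed monitoring design.

```python
# Illustrative only: bin count, thresholds, and the synthetic data are
# assumptions, not a prescribed monitoring design.
import numpy as np

def psi(baseline, current, bins=10):
    """Population Stability Index between a baseline score sample and a
    current production sample, using decile bins from the baseline."""
    edges = np.percentile(baseline, np.linspace(0, 100, bins + 1))
    current = np.clip(current, edges[0], edges[-1])  # keep outliers in range
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Small floor avoids log(0) / division by zero on empty bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(600, 50, 10_000)  # scores at model validation
current = rng.normal(585, 55, 2_000)    # scores observed in production

value = psi(baseline, current)
print(f"PSI: {value:.3f}")
# Common rule of thumb (not policy): <0.10 stable, 0.10-0.25 watch, >0.25 act.
if value > 0.25:
    print("Material population shift: escalate for model risk review")
```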
Requirements
10+ years of progressive experience in AI governance, responsible AI, AI ethics, or related fields, with at least 5 years in financial services
Deep understanding of AI/ML technologies, including supervised and unsupervised learning, deep learning, natural language processing, and generative AI
Demonstrated expertise in developing and implementing AI governance frameworks in regulated financial services environments
Comprehensive knowledge of financial services regulations affecting AI systems, including model risk management, fair lending, consumer protection, and data privacy laws
Proven experience with AI risk management, including bias detection and mitigation, model validation, and adversarial robustness testing
Experience managing regulatory examinations, audits, or third-party assessments related to AI/ML systems
Bachelor’s degree in Computer Science, Data Science, Engineering, or a related technical field required