Ensures AI applications, products, and solutions adhere to established company policies, regulatory requirements, and industry best practices
Conducts compliance assessments and risk evaluations, and reviews AI systems to identify potential issues both before and after deployment
Works closely with AI development teams, data scientists, and business stakeholders to support responsible AI practices while documenting findings and recommendations in clear, evidence-based, and actionable reports
Performs compliance reviews of AI applications and products to assess alignment with internal policies, governance standards, and standard operating procedures, including verification of required documentation, approvals, and controls prior to production deployment
Conducts structured risk assessments of AI systems across their lifecycle, identifying and documenting risks related to bias, privacy, security, safety, model behavior, and regulatory non-compliance; evaluates risk likelihood, impact, and adequacy of mitigation controls
Reviews model development practices, data handling procedures, deployment controls, and technical artifacts (e.g., model cards, system architecture documentation) to identify compliance gaps and discrepancies between documented capabilities and actual system behavior
Investigates AI system incidents, complaints, or governance concerns by analyzing system behavior, data flows, and decision logic; documents investigative methods, evidence reviewed, and conclusions reached
Conducts hands-on testing and probing of AI systems to validate documented claims regarding performance and behavior, and supports ongoing monitoring of deployed systems
Tracks compliance and risk findings, remediation actions, and residual risk through maintained risk registers and supporting documentation; verifies that corrective actions are implemented and documented
Partners with AI development teams, product owners, and subject matter experts to gather information for assessments and investigations, and prepares clear findings, executive summaries, and recommendations for leadership and governance stakeholders
Monitors trends in compliance and risk findings to identify systemic issues and support continuous improvement of AI governance practices; stays current with evolving AI regulations, standards, and industry best practices
Performs other duties as assigned
Complies with all policies and standards
Requirements
Bachelor's degree in Information Systems, Risk Management, Compliance, Data Science, or a related field, or equivalent experience, required
Master's Degree preferred
3+ years of progressively responsible experience in compliance, risk assessment, audit, or quality assurance, preferably supporting technology or AI-enabled systems, required
Working knowledge of AI and machine learning concepts, including supervised learning, generative AI, and model evaluation approaches, required
Demonstrated ability to analyze technical documentation, identify compliance and risk issues, and gather supporting evidence required
Experience supporting risk assessments and drafting findings with actionable recommendations required
Understanding of the AI lifecycle as defined by NIST preferred
Proven track record of creating and maintaining thorough documentation required
Benefits
Competitive pay
Health insurance
401(k) and stock purchase plans
Tuition reimbursement
Paid time off plus holidays
Flexible approach to work with remote, hybrid, field, or office work schedules