Manage model versioning, experiment tracking, deployment, rollback, and monitoring.
Work across Bronze, Silver, and Gold layers to ensure model inputs are trustworthy and business outputs are usable.
Implement data quality checks, feature readiness checks, and lineage/governance controls before models go live.
Enable consumption of predictions and model outputs in Power BI dashboards or semantic models.
Build reusable Python-based automation for data preparation, feature engineering, model deployment, and operational support.
Enable model scoring, batch inference, and operationalization within Azure or Fabric-based environments.
Support CI/CD practices for machine learning assets, notebooks, pipelines, and related code repositories.
Collaborate with data scientists to move models from experimentation into production.
Define and enforce standards for versioning, lineage, reproducibility, and governance.
Diagnose and resolve source-data breakages, schema drift, feature leakage, data freshness gaps, model performance decay, bias, and mismatches between model output and the consumption layer.
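The kinds of checks above (schema drift, data freshness) are typically scripted in Python before a batch is scored. Below is a minimal illustrative sketch, not a prescribed implementation: the expected schema, staleness window, and column names are hypothetical, and a real pipeline would pull the contract from a registry or lineage tool rather than hard-code it.

```python
from datetime import datetime, timedelta, timezone

import pandas as pd

# Hypothetical feature-table contract (column name -> expected dtype).
EXPECTED_SCHEMA = {"customer_id": "int64", "spend_30d": "float64"}
MAX_STALENESS = timedelta(hours=24)

def check_schema_drift(df: pd.DataFrame) -> list[str]:
    """Return drift findings: missing columns or dtype changes."""
    findings = []
    for col, dtype in EXPECTED_SCHEMA.items():
        if col not in df.columns:
            findings.append(f"missing column: {col}")
        elif str(df[col].dtype) != dtype:
            findings.append(
                f"dtype drift on {col}: expected {dtype}, got {df[col].dtype}"
            )
    return findings

def check_freshness(last_loaded: datetime) -> bool:
    """True if the latest load falls inside the allowed staleness window."""
    return datetime.now(timezone.utc) - last_loaded <= MAX_STALENESS

# Example: an upstream change silently turned a numeric column into strings.
batch = pd.DataFrame({"customer_id": [1, 2], "spend_30d": ["10.5", "3.2"]})
issues = check_schema_drift(batch)
```

Gating batch inference on checks like these (failing fast and alerting instead of scoring stale or drifted data) is what keeps the Silver-to-Gold handoff trustworthy.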
Requirements
Experience with Azure Machine Learning, Fabric notebooks, Dataflows, Lakehouse, or equivalent services.
Knowledge of Spark or distributed processing.
Exposure to feature stores, model registries, and experiment tracking.
Strong experience in Python for automation, orchestration, and ML workflows.
Hands-on experience with Microsoft Azure or Microsoft Fabric.
Working knowledge of MLOps practices, including deployment, monitoring, version control, and release management.
Experience supporting data science solutions in production environments.
Good understanding of SQL, data validation, and data pipeline concepts.
Familiarity with Power BI or semantic/reporting layer integration.
Experience with Git-based development and CI/CD workflows.
Strong problem-solving skills and the ability to anticipate operational issues before they impact users.