Skills: AWS, Cloud, Python, PySpark, PyTorch, TensorFlow, Scikit-learn, SQL, Machine Learning (ML), Deep Learning, Large Language Models (LLMs), RAG, MLOps, MLflow, Kubeflow, Databricks, SageMaker, Bedrock, CI/CD
About this role
Role Overview
Design and develop end-to-end ML solutions — from data exploration and feature engineering to model training, validation, and deployment.
Collaborate cross-functionally with engineers, analysts, and product teams to integrate predictive and recommendation models into customer-facing and internal applications.
Implement scalable ML pipelines using Databricks, PySpark, and Delta Lake, ensuring reproducibility, performance, and maintainability.
Run controlled experiments (A/B tests, uplift modelling, causal inference) to measure model performance and quantify business impact.
Operationalise models through CI/CD and MLOps best practices, including model versioning, monitoring, retraining strategies, and governance.
Monitor production systems for drift, performance degradation, and anomalies, applying explainability and fairness techniques where needed.
Contribute to the development of feature stores and reusable data assets to accelerate experimentation and deployment cycles.
Stay current with emerging trends in ML, MLOps, and cloud data technologies to continuously improve model accuracy, scalability, and efficiency.
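As one concrete illustration of the drift monitoring responsibility above, here is a minimal, stdlib-only sketch of the Population Stability Index (PSI), a common metric for detecting distribution shift between training-time and production data. The equal-width binning, the 1e-6 floor, and the 0.2 alert threshold are illustrative conventions, not a prescribed implementation:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a live sample."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def frac(sample, b):
        left = lo + b * width
        right = left + width
        # Count values in bin b; the last bin also claims the max value.
        n = sum(1 for x in sample if left <= x < right or (b == bins - 1 and x == hi))
        return max(n / len(sample), 1e-6)  # floor avoids log(0) for empty bins

    return sum(
        (frac(actual, b) - frac(expected, b)) * math.log(frac(actual, b) / frac(expected, b))
        for b in range(bins)
    )

baseline = [0.1 * i for i in range(100)]    # training-time feature distribution
live = [0.1 * i + 2.0 for i in range(100)]  # shifted production sample

print(psi(baseline, live) > 0.2)  # prints True; PSI > 0.2 is a common drift alarm
```

In a production pipeline this check would typically run on a schedule against feature tables, with alerts feeding the retraining strategy rather than a simple print.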
Requirements
Extensive hands-on experience applying machine learning and statistical modelling in production or product-oriented environments.
Proven understanding of the full spectrum of ML techniques — from traditional models (linear/logistic regression, tree-based methods, ensemble learning) to modern deep learning architectures (CNNs, RNNs, transformers, graph neural networks, diffusion and foundation models).
Demonstrated ability to design scalable ML pipelines and automate workflows with MLOps tools (MLflow, Kubeflow, Databricks ML runtime, AWS SageMaker, or AWS Bedrock).
Strong Python skills, with proficiency in Scikit-learn, AutoGluon, PyTorch or TensorFlow, and PySpark MLlib.
Familiarity with retrieval-augmented generation (RAG) and fine-tuning of large language models is a plus.
Proficiency in SQL and distributed data frameworks, with experience in feature engineering at scale.
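To illustrate the kind of feature engineering the last requirement refers to, here is a toy, stdlib-only group-by aggregation producing per-user features. At scale this would be a PySpark or SQL job over a Delta Lake table; the event schema and feature names here are invented for the example:

```python
from collections import defaultdict
from statistics import mean

# Toy event log; in practice this would be read from a distributed table.
events = [
    {"user_id": "u1", "amount": 10.0},
    {"user_id": "u1", "amount": 30.0},
    {"user_id": "u2", "amount": 5.0},
]

def user_features(rows):
    """Per-user aggregates, equivalent to:
    SELECT user_id, COUNT(*) AS txn_count, AVG(amount) AS avg_amount
    FROM events GROUP BY user_id
    """
    grouped = defaultdict(list)
    for r in rows:
        grouped[r["user_id"]].append(r["amount"])
    return {
        uid: {"txn_count": len(amounts), "avg_amount": mean(amounts)}
        for uid, amounts in grouped.items()
    }

print(user_features(events)["u1"])  # {'txn_count': 2, 'avg_amount': 20.0}
```

Materialising aggregates like these into a feature store is what lets training and serving share one consistent, reusable definition of each feature.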