Quorum Software is a rapidly growing company and an industry leader in oil & gas transformation. The company is seeking an AI/ML & Data Engineering Lead to establish its AI and machine learning practice on a new strategic data platform, in a role that combines hands-on architecture with cross-product coordination.
Responsibilities:
- Establish Quorum’s AI/ML practice and standards on top of the strategic data platform, defining shared patterns, reference architectures, and reusable components that enable AI capabilities across 30+ products
- Own end-to-end AI platform architecture, including curated datasets, feature/embedding pipelines, ML training workflows, LLM integrations, evaluation frameworks, and production-grade model deployment
- Design and deliver scalable data engineering foundations for AI, ensuring data quality, lineage, governance, access controls, and performance for ML workloads (e.g., Unity Catalog and data product patterns)
- Build practical, working prototypes that accelerate adoption—turning high-priority business problems into demonstrable AI solutions and hardening them into repeatable platform capabilities
- Partner with product and engineering leaders across teams to identify AI opportunities, define requirements, prioritize a roadmap, and integrate AI features into product backlogs using shared platform capabilities
- Guide teams on solution selection and feasibility, advising when to use classical ML vs. LLMs vs. simpler deterministic approaches, and making pragmatic build-versus-buy recommendations
- Lead ML/LLM integration patterns such as RAG architectures, vector search, prompt management, model routing, and safety/guardrail strategies—ensuring reliability, security, and maintainability
- Operationalize MLOps and AI quality practices, including CI/CD for ML, automated retraining, monitoring and observability, A/B testing, model performance evaluation, and drift detection
- Ensure production readiness and reliability of AI systems through robust deployment strategies, rollback plans, latency/cost controls, SLAs, and incident response collaboration
- Provide technical leadership and mentorship to product engineering teams adopting AI/ML, improving ML literacy and enabling consistent engineering practices across global teams
- Translate business strategy into technical execution, communicating clearly with executive stakeholders, presenting tradeoffs and progress, and aligning AI initiatives to measurable outcomes
- Coordinate with external partners and vendors (including staff augmentation/consulting engagements when applicable), ensuring deliverables align with Quorum architecture, security standards, and product needs
- Drive cost and performance optimization across AI workloads (compute/storage, data pipeline efficiency, model inference costs), balancing capability with sustainable operating cost
- Maintain governance and compliance alignment for data and AI use, collaborating with security, legal, and platform leaders to support responsible use of customer and operational data
- Stay hands-on with cloud and tooling (Azure/AWS/GCP, Databricks, Python ML stack) to validate designs, unblock teams, and ensure platform decisions are grounded in real implementation constraints
Requirements:
- 5+ years in data engineering, ML engineering, or hybrid AI/data roles with significant production experience
- Excellent communication skills, comfortable presenting to executives and explaining technical concepts to non-technical stakeholders
- Databricks experience desired, including hands-on work with Unity Catalog, ML pipelines, or data platform architecture
- Strong background in both ML model development and data pipeline engineering for ML workloads
- Proven ability to translate business problems into pragmatic AI/ML solutions
- Experience working across multiple product teams or engineering organizations
- Hands-on proficiency with at least one cloud platform (Azure, AWS, or GCP), Python, and ML frameworks (TensorFlow, PyTorch, scikit-learn)
- Experience establishing AI/ML practices from scratch in a multi-product organization
- Background defining when to use custom models versus LLMs versus simpler approaches
- Familiarity with LLM fine-tuning, RAG architectures, and vector databases
- Experience with time series forecasting, anomaly detection, or predictive analytics
- Knowledge of industrial operations or domain-specific AI applications
- Experience with MLOps practices: automated retraining, A/B testing, model performance evaluation
- Background in feature engineering, data pipeline optimization, and cost management for AI workloads
- Familiarity with agentic AI frameworks and orchestration patterns
- Experience managing or influencing staff augmentation or consulting engagements