Design, build, and deploy generative AI applications using Google Gemini (Pro, Ultra, Flash), PaLM 2, and other Google-hosted foundation models via Vertex AI.
Implement Retrieval-Augmented Generation (RAG) architectures using Vertex AI Search, Vector Search, and document embedding pipelines for enterprise knowledge retrieval.
Develop multi-modal AI capabilities leveraging Gemini's vision, text, and code understanding for hospitality use cases such as guest experience, analytics, and operations.
Build and maintain agentic AI workflows and orchestration using LangChain, LlamaIndex, or Google Agent Builder — integrating tools, APIs, and enterprise data sources.
Optimize prompt engineering strategies, system instructions, and grounding mechanisms for production-grade LLM deployments.
Develop end-to-end ML pipelines from data ingestion and feature engineering through model training, evaluation, and production deployment on Vertex AI Pipelines / Kubeflow.
Apply advanced data science techniques — statistical modeling, time-series forecasting, recommendation systems, and anomaly detection — for hospitality and gaming analytics.
Build scalable data transformation and feature engineering workflows using BigQuery, Dataflow, and Pub/Sub.
Implement model monitoring, drift detection, and automated retraining strategies to ensure sustained model performance in production.
Leverage TensorFlow, JAX, or PyTorch for custom model development where pre-trained solutions are insufficient.
Architect and manage cloud-native AI infrastructure on GCP — including Vertex AI, BigQuery ML, Cloud Run, GKE, Cloud Functions, and Cloud Storage.
Design secure, scalable, and cost-optimized GCP environments aligned with enterprise compliance requirements and CLIENT's data governance standards.
Implement CI/CD pipelines for ML model serving using Cloud Build, Artifact Registry, and Vertex AI Model Registry.
Set up monitoring, observability, and alerting for AI/ML workloads using Cloud Monitoring, Cloud Logging, and custom dashboards in Looker.
Build agents with the Google Agent Development Kit (ADK) and deploy them via Cloud Run or Vertex AI Agent Engine.
Design and implement conversational AI agents using Dialogflow CX and Agent Builder for guest-facing and internal automation use cases.
Partner with CLIENT's business and technology stakeholders to define AI use cases, prioritize the roadmap, and translate requirements into technical deliverables.
Champion responsible AI practices — model fairness, explainability, content safety, and data privacy — across all AI solution designs.
Produce and maintain technical documentation including architecture decision records (ADRs), API specs, model cards, and runbooks.
Mentor junior engineers and lead knowledge-sharing sessions; contribute to AI community of practice within the delivery organization.
Requirements
Bachelor's or Master's degree in Computer Science, Engineering, Data Science, Artificial Intelligence, or a related technical field.
At least 3 years of hands-on engineering experience focused on AI/ML or LLM development on cloud platforms.
Strong proficiency in Python; working knowledge of SQL; familiarity with infrastructure-as-code tools (Terraform, Cloud Deployment Manager) preferred.
Google Cloud Professional certifications (Cloud Architect, Machine Learning Engineer, Data Engineer).