Our client is looking for a Google Cloud Platform AI/ML Engineer consultant for a remote contract project; the detailed requirements are below.
Job Title: Google Cloud Platform AI/ML Engineer (12+ Years)
Location: Remote
Contract
Responsible for designing, building, and deploying machine learning models and AI-driven systems within the Google Cloud ecosystem. This role bridges data science and software engineering, focusing on creating scalable, production-ready AI solutions—such as Generative AI, natural language processing, and predictive models—using tools like Vertex AI, TensorFlow, and BigQuery.
Job description:
- Education: Bachelor’s or Master’s degree in Computer Science, AI, Machine Learning, or a related field, with a minimum of 12 years of experience.
- Experience: 5+ years in AI/ML model deployment and software engineering.
- Technical Proficiencies: Strong programming skills in Python and SQL.
- Google Cloud Platform Expertise: Proven experience with Google Cloud Platform, specifically Vertex AI, Dataflow, and BigQuery.
- ML Frameworks: In-depth knowledge of TensorFlow, PyTorch, or Scikit-learn.
- DevOps/Containerization: Proficiency with Docker, Kubernetes (GKE), and CI/CD tools.
Preferred Qualifications
- Google Cloud Platform Professional Machine Learning Engineer certification.
- Experience with Vertex AI Agent Builder.
- Background in Natural Language Processing (NLP) or Computer Vision.
Key Responsibilities
- Model Development & Training: Develop and train predictive and generative AI models using Python and frameworks such as TensorFlow, PyTorch, or Scikit-learn, often within Vertex AI.
- Google Cloud Platform Implementation: Implement solutions using Google Cloud Platform services like BigQuery, Dataflow, Cloud Functions, and Vertex AI Pipelines to build scalable infrastructure.
- MLOps and Automation: Design and automate MLOps pipelines (training, deployment, monitoring) to ensure model performance, scalability, and reliability.
- Data Engineering: Construct data pipelines for ingestion, preprocessing, and storage of structured/unstructured data using SQL and BigQuery.
- Generative AI Integration: Implement LLMs, retrieval-augmented generation (RAG) patterns, and agentic workflows (e.g., using LangChain).
- Optimization & Troubleshooting: Monitor and optimize deployed models for accuracy, latency, and cost-effectiveness.