Work across diverse GenAI platforms such as AWS, Salesforce, Oracle, Snowflake, and MS Copilot, as well as other third-party GenAI platforms and libraries
Automate workflows that extract complex, multimodal unstructured content from a variety of sources into highly accurate, reliable structured content using platforms such as AWS Textract and Bedrock
Design and build Model Context Protocol (MCP) hosts, clients, and servers
Establish and use frameworks for automated LLM testing
Create regression test suites to detect drift or prompt breakage
Integrate with internal and external web services using secure authentication and authorization mechanisms
Adopt safe practices to protect against prompt injections and jailbreaks, and ensure conformance to enterprise security guidelines
Design, develop, and deploy production-grade traditional ML models (e.g., regression, classification, clustering, recommender systems) for a variety of business use cases.
Design, maintain, and optimize end-to-end AI/ML pipelines including data ingestion, training, evaluation, deployment, and monitoring on cloud infrastructure (e.g., AWS or equivalent)
Ensure AI/ML solutions are scalable, reliable, secure, and cost-effective within cloud environments
Create reusable components, frameworks, and best practices to accelerate AI development
Partner with data scientists, architects, product managers, business stakeholders, and technical teams across the organization to align AI solutions with organizational goals.
Provide hands-on technical support and mentorship to technical teams across the enterprise.
Requirements
Bachelor’s degree in Computer Science, Engineering, or a related field.
3+ years of experience designing and deploying ML/AI solutions in real-world environments
Very strong Python skills, with hands-on experience using LLM APIs (OpenAI, Azure OpenAI, Gemini, Anthropic, etc.) through Python and Python-based frameworks
Strong hands-on experience in prompt engineering, context construction, and grounding strategies
Strong hands-on experience with Retrieval-Augmented Generation (RAG): extracting, chunking, and creating embeddings from unstructured documents drawn from diverse sources, including O365 (email, Word, Excel), PDFs, and webpages.
Comfortable building Model Context Protocol (MCP) clients, servers, and hosts.
Strong expertise in building REST APIs and integrating with internal and external APIs
Hands-on experience with Intelligent Document Processing and/or OCR technologies on complex documents
Knowledge of Google's Agent2Agent (A2A) protocol
Deep experience in AWS (Lambda, Bedrock, Step Functions, API Gateway, IAM)
Strong experience with GenAI observability tools such as Dynatrace or similar
Excellent grasp of GenAI foundations and concepts
Clear understanding of enterprise data privacy, AI governance, and observability
Proficiency in Python and common ML/AI libraries (e.g., TensorFlow, PyTorch, scikit-learn).
Strong understanding of data engineering, SQL, and feature engineering.
Hands-on experience with cloud services such as AWS SageMaker, Lambda, ECS, S3, and IAM.
Familiarity with containerization (Docker) and orchestration (e.g., Airflow, Kubeflow).
Experience working with version control and collaboration tools (Git, Jira, Confluence, etc.).