Skills: Airflow, Cloud, Docker, Google Cloud Platform, Kubernetes, Python, Terraform, AI, ML, GenAI, Large Language Models, RAG, Agentic, Kubeflow, LangGraph, GCP, Google Cloud, Agile, CI/CD
About this role
Role Overview
Design and implement scalable, production-grade AI/ML solutions based on AI use case requirements from internal customers
Own the agentic AI / ML system lifecycle from design through deployment, monitoring, and continuous improvement
Build and maintain software services that serve AI agents / ML models and integrate them into KION’s software and data ecosystem
Write clean, maintainable, and high-quality code, ensuring alignment to industry best practices
Develop and enforce best practices in system design, software architecture, API design, coding standards, and security
Collaborate with data and cloud engineers to establish AI/ML development environments for efficient model training and evaluation, streamline the model deployment process (including dependency and version management), and leverage cloud platforms (GCP preferred) where possible
Collaborate with business and IT to develop GenAI based applications, supporting RAG, multi-agent, multimodal systems leveraging modern frameworks
Build and maintain robust CI/CD workflows for AI/ML applications
Partner with AI enablement teams across the organization, as well as KION IT Cloud Infrastructure, AI Platform, and security and data privacy teams, to ensure AI/ML capabilities align with KION’s IT framework and guidelines.
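To give candidates a concrete sense of the GenAI/RAG work described above, here is a minimal sketch of the retrieval step in a RAG pipeline. The documents and scoring are hypothetical toy stand-ins; a production system at this level would use embeddings and a vector store rather than keyword overlap.

```python
# Toy retrieval step for a RAG pipeline (hypothetical documents;
# real systems use embeddings + a vector store, not word overlap).

def score(query: str, doc: str) -> int:
    """Count query terms that also appear in the document (toy relevance)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents with the highest overlap score."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

docs = [
    "forklift battery maintenance schedule",
    "warehouse automation with AGVs",
    "employee onboarding checklist",
]
# The top-scoring document would then be passed to the LLM as grounding context.
top = retrieve("battery maintenance for forklifts", docs, k=1)
```

The same shape scales up: swap the scorer for an embedding similarity and the list for a vector index, and the retrieved context is injected into the LLM prompt.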
Requirements
BS/MS in Computer Science, Software Engineering, AI/ML, or a related field
3+ years of experience in AI/ML engineering, microservice and API development, or related fields
Proven experience deploying and supporting agentic AI / ML models in production systems with SLAs
Strong experience with cloud platforms (preferably GCP) and related tools: Docker, Kubernetes, edge computing
Experience working in agile development environments, delivering incremental enhancements that drive customer value
Deep understanding of APIs, software integration, and system architecture in enterprise-scale environments
Hands-on experience with GenAI application development and performance evaluation
Working knowledge of common AI/ML algorithms and LLMs (Large Language Models)
Proficiency in Python (preferred) or other general-purpose programming languages
Familiarity with orchestration tools: LangGraph, Airflow, Kubeflow, etc.
Strong experience with DevOps practices and CI/CD tools
Proficiency in cloud-based architecture and experience with infrastructure automation (Terraform, Kubernetes).
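For the agentic/orchestration requirements above, here is a minimal sketch of a tool-calling agent loop. The tool, routing rule, and inventory response are hypothetical; frameworks like LangGraph model this pattern as a state graph, with an LLM (rather than a keyword check) deciding which tool to invoke.

```python
# Toy agentic tool-calling step (hypothetical tool and routing;
# a real agent would let an LLM choose the tool and arguments).

def lookup_stock(item: str) -> str:
    """Stand-in for a real inventory API call."""
    return f"{item}: 12 units in stock"

TOOLS = {"lookup_stock": lookup_stock}

def agent_step(request: str) -> str:
    """Route a request to a tool, or fall back to a direct LLM answer."""
    if "stock" in request.lower():
        return TOOLS["lookup_stock"](request.split()[-1])
    return "no tool matched; answer directly with the LLM"

result = agent_step("check stock for pallets")
```

In a graph-based framework the `if`/fallback branch becomes explicit edges between nodes, which is what makes the agent's control flow observable and testable in production.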