Interos.ai is a supply chain risk intelligence company focused on building trusted and transparent supply chains. They are seeking a Lead Machine Learning Engineer to serve as a technical architect for their ML team, responsible for developing AI-powered agents and integrating AI solutions into their product.
Responsibilities:
- Develop AI-powered agents that integrate deeply into the Interos.ai product experience
- Own the full AI system lifecycle, including problem definition, data acquisition, model training, system integration, and validation
- Evaluate, select, and integrate AI tools, frameworks, and platforms to ensure scalability, efficiency, and compliance
- Stay up to date with advancements in AI, including LLMs, RAG, autonomous agents, and orchestration frameworks, to drive innovation
- Define best practices for agent design, security, and governance in AI-driven enterprise applications
- Work with product, engineering, and data science teams to design and implement AI-based automation solutions
- Identify new opportunities to embed AI into the Interos.ai platform
- Establish monitoring, feedback loops, and continuous learning mechanisms to improve agent performance over time
Requirements:
- 8+ years of experience in software development
- 5+ years of experience in AI and machine learning, with a focus on enterprise applications
- Deep understanding of LLMs, AI agents, and orchestration frameworks (e.g., LangGraph)
- Experience with enterprise-grade AI architectures, API integration, and large-scale automation
- Proficiency in Python, cloud services (AWS), and AI model deployment
- Strong problem-solving skills and the ability to translate business needs into AI-powered solutions
- Experience in data privacy, security, and compliance for AI in enterprise environments
- Experience with data platforms such as Snowflake and Databricks
- Master's or doctorate degree in a relevant field such as Computer Science, Mathematics, or Engineering
- Experience developing and deploying machine learning solutions using large-scale datasets, including specification design, data collection and labeling, model development, validation, deployment, and ongoing monitoring
- Hands-on experience with vector databases, retrieval-augmented generation (RAG), and fine-tuning LLMs
- Experience fine-tuning models end to end, including identifying and curating datasets and experimenting with models for iterative improvement
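As a rough illustration of the retrieval step behind RAG with a vector database mentioned above (the documents, embeddings, and in-memory store here are invented stand-ins for a real embedding model and vector database, not Interos.ai internals):

```python
import math

# Toy in-memory "vector store": document text mapped to a hand-made
# embedding. In a real RAG pipeline, embeddings come from an embedding
# model and live in a vector database.
DOCS = {
    "Supplier A had a cyber incident in Q3.": [0.9, 0.1, 0.0],
    "Supplier B is fully compliant.": [0.1, 0.8, 0.1],
    "Port congestion is delaying shipments.": [0.0, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, k=1):
    """Return the k documents most similar to the query embedding."""
    ranked = sorted(DOCS, key=lambda d: cosine(DOCS[d], query_vec), reverse=True)
    return ranked[:k]

def build_prompt(question, query_vec):
    """Augment the question with retrieved context before an LLM call."""
    context = "\n".join(retrieve(query_vec, k=1))
    return f"Context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("Which supplier had a security issue?", [1.0, 0.0, 0.0])
```

The final prompt pairs the most relevant document with the question, which is the core augmentation idea; production systems add chunking, embedding models, and an orchestration layer (e.g., LangGraph) around this retrieval core.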