Role Overview
Collaborate with product, data, and engineering teams to define, design, and integrate AI features seamlessly into the user experience.
Build and maintain scalable ML pipelines and infrastructure balancing speed, flexibility, security, and scalability.
Build and optimize LLM-powered features including natural language query generation, data summarization, and data exploration tools.
Design and implement prompt engineering strategies and evaluation frameworks, and iterate on model performance.
Fine-tune Large Language Models (LLMs) or custom architectures for performance, latency, and cost-efficiency.
Stay up-to-date with the latest research (e.g., papers from NeurIPS, ICML) and identify opportunities to apply state-of-the-art techniques to business problems.
Requirements
5+ years of professional experience in software engineering with a focus on Machine Learning or AI.
Strong proficiency in Python, including libraries such as NumPy, pandas, and scikit-learn.
Hands-on experience with prompt engineering and evaluation frameworks.
LLM/GenAI: experience hardening and productionizing GenAI-based applications.
Experience designing/consuming REST/GraphQL APIs and deploying applications using containerization ecosystems (Docker, Kubernetes).
English C1: able to participate in technical calls and communicate clearly in writing.
Degree in Computer Science, Mathematics, a related field, or equivalent practical experience.