Develop and maintain Python-based applications, integrating LLMs and AI-powered solutions
Collaborate with cross-functional teams (product managers, software engineers, and data teams) to understand requirements and translate them into data-driven solutions
Assist in the development, testing, and optimization of AI-driven features
Optimize performance and scalability of applications utilizing LLMs
Debug and resolve Python application errors, ensuring stability and efficiency
Conduct exploratory data analysis and data cleaning to prepare raw data for modeling
Optimize and maintain data storage and retrieval systems for model input/output
Research and experiment with new LLM advancements and AI tools to improve existing applications
Document workflows, model architectures, and code to ensure reproducibility and knowledge sharing across the team
Requirements
Bachelor’s degree in Computer Science, Software Engineering, or a related field with 2+ years of relevant experience
Strong proficiency in Python, including experience with frameworks such as FastAPI, Flask, or Django
Understanding of fundamental AI/ML concepts, algorithms, and techniques, and experience implementing AI/ML workflows
Familiarity with DevOps/MLOps practices and tools for managing the AI/ML lifecycle in production environments
Understanding of LLM training processes and data requirements
Experience with LLM fine-tuning, RAG, and prompt engineering
Hands-on experience with LLMs (e.g., OpenAI GPT, Llama, or other transformer models) and their integration into applications (e.g., LangChain or Llama Stack)
Familiarity with REST APIs, data structures, and algorithms
Strong problem-solving skills with the ability to analyze and debug complex issues
Experience with Git, CI/CD pipelines, and Agile methodologies
Experience working with cloud-based environments (AWS, GCP, or Azure) is a plus
Knowledge of vector databases (e.g., Pinecone, FAISS, ChromaDB) is a plus