Collaborate with cross-functional teams to assist in the design and productionization of machine learning, MLOps, traditional AI, LLM, and agentic AI solutions
Support the implementation of application integrations that leverage newly built machine learning, MLOps, traditional AI, LLM, and agentic AI products
Assist in building solutions that improve the delivery speed and scalability of data and product pipelines
With minimal supervision, leverage managed and serverless cloud offerings for application solutions and data engineering pipelines
Support client-facing teams by preparing technical documentation and assisting in project communications
Stay up to date with the latest advancements in data science, machine learning, and AI technologies
Requirements
Bachelor's degree in Computer Science or a related field
Familiarity with ETL/ELT concepts and best practices in data engineering
Proficiency in programming languages such as Python and SQL (JavaScript a plus)
Experience or coursework with NLP and LLM-based technologies and frameworks
Exposure to one or more major cloud or data platforms (e.g., AWS, GCP, Azure, Snowflake, or Databricks)
Eagerness to learn and adapt quickly in a fast-paced environment
Excellent verbal and written communication skills and a team-player attitude
Strong problem-solving and analytical skills
2–4 years of foundational experience in data engineering, data science, or ML/AI roles or internships
Tech Stack
AWS
Azure
Cloud
ETL
Google Cloud Platform
JavaScript
Python
SQL
Benefits
Fully remote
Flexible Schedule
Unlimited Paid Time Off (PTO)
Paid parental/bereavement leave
Work with globally recognized clients, building skills and experience that strengthen your resume