Build and ship applied AI features with the team, contributing from early design through production rollout and ongoing improvements.
Translate user feedback and research prototypes into clear technical requirements and pragmatic delivery plans.
Build LLM-powered workflows such as tool-calling agents, structured output pipelines, retrieval/tool integrations, and safe prompting strategies.
Write production-grade code with clear abstractions, stable interfaces, and robust error handling.
Learn and apply strong testing practices, including unit tests, integration tests, and golden/regression test suites, with guidance from the team.
Help add instrumentation for debugging and iteration, including logging, tracing, and metrics for AI features.
Learn and follow best practices for safe and reliable AI features, including rate limiting, safe input handling, and prompt-injection awareness.
Work closely with AI Engineers and researchers to productionize experiments, define deployment workflows from notebooks to production, and document best practices.
Requirements
1+ years of industry experience in software engineering (Python preferred), or relevant internship experience combined with strong open-source or project contributions.
Hands-on experience building with modern AI/LLM tools such as the OpenAI API, HuggingFace, or similar, and interest in productionizing prototypes.
Strong programming fundamentals, including design patterns, code structure, testing practices, and code review habits.
Familiarity with cloud environments (GCP or AWS) and containerization tools such as Docker.
Basic experience with API/service development (e.g., FastAPI/REST) and working in a collaborative codebase using Git and code reviews.
Exposure to CI/CD and software delivery workflows (e.g., GitHub Actions or similar) is a plus.
Interest in production reliability, including logging/monitoring, safe rollout practices, and responsible handling of failures.
Problem-solving mindset, with the ability to learn quickly and debug issues in real systems.
Exposure to deploying and operating AI/LLM systems in production or enterprise-like environments.
Clear communication skills and a collaborative approach, including the ability to understand user needs and discuss trade-offs with the team.
Tech Stack
AWS
Docker
Google Cloud Platform
Python
Benefits
Competitive Compensation
Stock Options Plan: Empowering you to share in our success and growth.
Cutting-Edge Tools: Access to state-of-the-art tools and collaborative opportunities with leading experts in artificial intelligence, physics, hardware, and electronic design automation.
Professional Growth: Opportunities to attend industry conferences, present research findings, and engage with the global AI research community.
Impact-Driven Culture: Join a passionate team focused on solving some of the most challenging problems at the intersection of AI and hardware.