Design, build, and prototype AI-powered tools from scratch to improve data engineering and analytics workflows (e.g., dbt development, query generation, documentation, testing, monitoring)
Experiment with and evaluate AI/LLM-based approaches to accelerate development, improve data quality, and reduce manual effort
Identify opportunities to create new internal tools or automations that enhance team productivity, and take them from idea → prototype → iteration
Collaborate with data engineers, analysts, and stakeholders to turn ambiguous problems into practical, working solutions
Support the development, optimization, and maintenance of end-to-end data workflows, including pipelines, data models, and transformations, while ensuring data quality, reliability, and clear documentation
Requirements
Currently pursuing a degree in Computer Science, Data Science, Engineering, Mathematics, or a related field from an accredited university
Must be available at least 25 hours per week for 10 weeks during the summer internship program
Must be able to work in a hybrid environment, with the ability to work in person at our Glendale, CA headquarters a minimum of two days per week
Strong SQL skills and a basic understanding of data modeling concepts
Some experience with Python or another scripting language
Interest in AI/ML tools and a curiosity for applying AI to real-world problems
Interest in modern data stack tools like dbt, Airflow, Snowflake, or similar (hands-on experience a plus)
Strong analytical and problem-solving skills
Detail-oriented, curious, and eager to learn
Preferred Qualifications:
Exposure to cloud platforms (AWS, GCP, or Azure)
Experience with AI-assisted IDEs and coding tools (e.g., Cursor, Claude Code, Codex)
Hands-on experience with AI tools and LLM APIs (e.g., OpenAI, LangChain)
Experience building small tools, side projects, or automations (data or AI-related a plus)
Builder mindset: enjoys creating, experimenting, and iterating on ideas