Zillow is building the next generation of AI experiences for customers navigating real estate decisions. The Principal Machine Learning Engineer will set the technical direction for the Agentic Data Platform, architect scalable systems, and lead cross-organizational initiatives to enhance Zillow's agentic experiences.
Responsibilities:
- Set the technical direction. Define and own the multi-quarter architecture roadmap for the agentic data foundations (context engineering, agentic memory, and AI workflows) that power Zillow's agentic experiences
- Architect and ship at scale. Design, prototype, and ship systems that handle hundreds of millions of agent interactions with high availability, low latency, and predictable cost. Stay hands-on in code and production when it matters
- Drive cross-organization execution. Lead complex, multi-team initiatives across Agentic AI and Platform teams, aligning on architecture, surfacing dependencies, and driving outcomes through influence rather than direct authority
- Communicate to every level. Translate complex platform trade-offs, ambiguous customer problems, and emerging agentic paradigms into clear, actionable insights for engineering peers, product partners, Directors, and VPs
- Grow senior technical talent. Mentor Senior and Staff engineers, raise the bar on technical judgment and architecture decisions, and shape the engineering culture of the org
Requirements:
- 10+ years building, scaling, and operating large-scale data and ML infrastructure (production-grade pipelines, feature stores, and model-serving layers), including 1 to 2 recent years shipping agent-based or LLM-powered systems to production
- 3+ years as a technical leader spanning multiple organizations
- Hands-on experience designing and shipping agentic AI in production, including orchestration, tool use, memory and context engineering, retrieval (embeddings, hybrid search, ranking), and evaluation
- You understand how LLM-based systems fail in production and how to engineer around those failures
- Platform engineering background in scaling and abstracting large-scale data and ML infrastructure
- Expert in distributed systems architecture and operational excellence
- You've designed systems that hold up under massive scale and tight SLOs
- Expert-level Python; deep experience with agentic frameworks (LangGraph, LangChain, Agents SDK, AutoGen), large-scale data processing (Spark, Databricks, Airflow, Temporal), vector stores, and cloud infrastructure (AWS preferred)
- Proven ability to set technical direction across organizational boundaries, build trust with engineering, science, and product leaders, and articulate complex trade-offs clearly to engineering peers and executives
- You drive outcomes through influence, not authority
- Advanced degree (M.S. or Ph.D.) in Computer Science, Machine Learning, or a related field, with emphasis on building distributed systems and AI
- Experience building data platforms for agentic systems or real-time AI applications
- Experience working with regulated, private, or sensitive data at scale
- Experience designing evaluation, tracing, or safety frameworks for LLM-based production systems