ServiceTitan builds data and BI products that power growth for its customers. The company is seeking a Principal Engineer to lead semantic model architecture for its data platform, focusing on data modeling, governance, and consumption across multiple product surfaces.
Responsibilities:
- Design and evolve the semantic modeling layer that serves as the single source of truth for metrics, dimensions, entities, and business logic across all data products (a brief illustrative sketch follows this list)
- Define the standards for how semantic models are authored, versioned, tested, and governed
- Evaluate and drive the semantic layer technology strategy (e.g., dbt MetricFlow or equivalent)
- Architect how the semantic layer is consumed across three distinct product surfaces: Data Sharing (governed datasets for customers), Reporting (high-performance BI platform for customers), and Agentic Analytics (metadata-rich, discoverable interfaces that enable AI agents to reason over and query the semantic layer)
- Partner with adjacent teams to ensure the semantic layer meets each product’s unique requirements
- Own query performance, materialization strategies, pre-aggregation patterns, and cost optimization
- Ensure the semantic layer is highly performant and scalable as data volumes and consumer demand grow
- Build the semantic layer as a true platform experience: self-service metric onboarding, developer-friendly abstractions, clear documentation, data validation, and governance guardrails
- Make it easy for other teams to extend the semantic layer without compromising consistency or quality
- Operate as a technical leader across the Data & Reporting Platform organization
- Participate in and drive design sessions across teams
- Mentor engineers and manage stakeholder and leadership alignment
- Contribute to architecture decisions that span from data foundations through reporting and analytics
- Champion high-quality code with corresponding test coverage
- Use AI coding tools (Claude, Cursor, Copilot) as a core part of your daily workflow
- Drive adoption patterns, build team-specific contexts and workflows, and set the standard for how the team multiplies velocity through AI-assisted development
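The semantic-layer responsibilities above center on defining each metric once and compiling it consistently for every consumer. The Python sketch below illustrates that idea under stated assumptions: the `Metric` and `Dimension` classes, the `compile_metric_query` helper, and the `jobs_completed` metric over `analytics.fact_jobs` are hypothetical examples, not ServiceTitan's implementation or the dbt MetricFlow API.

```python
# A minimal, illustrative sketch of a semantic-layer metric definition and a
# tiny compiler for it. All names here (Metric, Dimension,
# compile_metric_query, jobs_completed, analytics.fact_jobs) are hypothetical
# and do not come from ServiceTitan's platform or the dbt MetricFlow API.
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass(frozen=True)
class Dimension:
    name: str          # column the metric may be grouped or filtered by
    description: str = ""


@dataclass(frozen=True)
class Metric:
    name: str                       # stable identifier consumers reference
    sql_expression: str             # aggregation logic, defined exactly once
    source_table: str               # governed upstream model
    dimensions: tuple[Dimension, ...] = field(default_factory=tuple)
    description: str = ""


def compile_metric_query(metric: Metric, group_by: list[str]) -> str:
    """Turn a metric request into SQL so every consumer (data sharing,
    BI reporting, AI agents) computes the number the same way."""
    allowed = {d.name for d in metric.dimensions}
    unknown = set(group_by) - allowed
    if unknown:
        raise ValueError(f"Unknown dimensions for {metric.name}: {sorted(unknown)}")
    select_dims = ", ".join(group_by)
    select_list = f"{select_dims}, " if group_by else ""
    group_clause = f" GROUP BY {select_dims}" if group_by else ""
    return (
        f"SELECT {select_list}{metric.sql_expression} AS {metric.name} "
        f"FROM {metric.source_table}{group_clause}"
    )


# One governed definition, reused by every consumer.
jobs_completed = Metric(
    name="jobs_completed",
    sql_expression="COUNT(DISTINCT job_id)",
    source_table="analytics.fact_jobs",
    dimensions=(Dimension("business_unit"), Dimension("completed_month")),
    description="Number of completed jobs.",
)

print(compile_metric_query(jobs_completed, ["business_unit"]))
```

In a production semantic layer the definitions would live in governed, versioned configuration and the compiler would handle joins, time grains, and materialized pre-aggregations; the point of the sketch is only that consumers request a metric by name and never re-implement its logic.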
Requirements:
- 10+ years of experience in Software Engineering or Data Engineering roles, including experience with large-scale, high-traffic, fault-tolerant systems
- Deep experience with semantic modeling, data engineering, data lakehouse, and data product development. Track record of building platform-level abstractions consumed by multiple product teams
- Strong experience with the dbt ecosystem. Experience with semantic layer technologies (e.g., dbt MetricFlow or similar) is highly preferred
- Expert-level SQL and Python skills. Experience with query optimization, materialization strategies, and performance tuning at scale
- Experience with modern data platform technologies: Snowflake, ClickHouse, or similar OLAP/columnar engines. Familiarity with Spark and streaming platforms (Kafka, Kinesis)
- Experience designing APIs and interfaces for domain-specific data products
- Demonstrated proficiency with AI coding tools (e.g., Claude, Cursor) as part of your regular engineering workflow; not just familiarity, but active daily use
- Experience leading the architecture and design of systems, including design patterns, reliability, and scaling
- Strong communication and technical writing skills. Ability to empathize with users and champion their experience
- B.S., M.S., or PhD in Computer Science or a related field
- Experience building semantic layers that serve both human analysts and programmatic/AI consumers
- Experience with data governance frameworks, metric versioning, or data product catalogs
- Familiarity with LLM-friendly data interfaces: designing schemas and metadata that enable AI agents to discover and query data effectively (a small illustrative example follows this list)
- Experience with data validation and quality frameworks (e.g., Monte Carlo, Great Expectations)
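To make the agent-facing requirements concrete, here is a minimal sketch of the kind of metadata that makes a metric discoverable to an AI agent or a programmatic consumer. The field names, the `jobs_completed` metric, and the catalog structure are assumptions for illustration only; real semantic catalogs (for example, dbt's semantic manifest) define their own schemas.

```python
# Illustrative sketch of an "LLM-friendly" metadata document for one metric.
# The structure and field names are assumptions for illustration; they are
# not a published catalog format.
import json

metric_catalog_entry = {
    "name": "jobs_completed",
    "description": "Number of completed jobs.",
    "grain": "one row per job",
    "aggregation": "COUNT(DISTINCT job_id)",
    "source": "analytics.fact_jobs",
    "dimensions": [
        {"name": "business_unit", "type": "categorical"},
        {"name": "completed_month", "type": "time"},
    ],
    # Synonyms and example questions help an agent map natural-language
    # requests onto governed metrics instead of raw tables.
    "synonyms": ["completed jobs", "finished jobs"],
    "example_question": "How many jobs were completed per business unit last quarter?",
}

print(json.dumps(metric_catalog_entry, indent=2))
```

Publishing structured, descriptive metadata like this alongside the metric definitions is one common way to let AI agents and human analysts discover and query the semantic layer consistently.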