Semantic Model Architecture : Design and evolve the semantic modeling layer that serves as the single source of truth for metrics, dimensions, entities, and business logic across all data products.
Data Product Enablement : Architect how the semantic layer is consumed across distinct product surfaces such as Reporting (a high-performance BI platform for customers) and Agentic Analytics (metadata-rich, discoverable interfaces that enable AI agents to reason over and query the semantic layer).
Performance & Scale : Own query performance, materialization strategies, pre-aggregation patterns, and cost optimization. Ensure the semantic layer is highly performant and scalable as data volumes and consumer demand grow.
Platform & Governance : Build the semantic layer as a true platform experience: self-service metric onboarding, developer-friendly abstractions, clear documentation, data validation, and governance guardrails.
Technical Leadership : Operate as a technical leader across the Data & Reporting Platform organization.
AI-Augmented Engineering : Use AI coding tools (Claude, Cursor, Copilot) as a core part of your daily workflow.
Requirements
10+ years of experience in Software Engineering or Data Engineering roles, including experience with large-scale, high-traffic, fault-tolerant systems.
Deep experience with semantic modeling, data engineering, data lakehouse, and data product development.
Track record of building platform-level abstractions consumed by multiple product teams.
Strong experience with the dbt ecosystem.
Expert-level SQL and Python skills.
Experience with query optimization, materialization strategies, and performance tuning at scale.
Experience with modern data platform technologies: Snowflake, ClickHouse, or similar OLAP/columnar engines.
Familiarity with Spark and streaming platforms (Kafka, Kinesis).
Experience designing APIs and interfaces for domain-specific data products.
Demonstrated proficiency with AI coding tools (e.g., Claude, Cursor) as part of your regular engineering workflow; not just familiarity, but active daily use.
Strong communication and technical writing skills.
Ability to empathize with users and champion their experience.
B.S., M.S., or PhD in Computer Science or a related field.
Highly Preferred
Experience building semantic layers that serve both human analysts and programmatic/AI consumers.
Experience with data governance frameworks, metric versioning, or data product catalogs.
Familiarity with LLM-friendly data interfaces; designing schemas and metadata that enable AI agents to discover and query data effectively.
Experience with data validation and quality frameworks (e.g., Monte Carlo, Great Expectations).
To effectively support our international teams, this position requires flexibility to overlap with US working hours as needed.
Tech Stack
Kafka
Python
Spark
SQL
Benefits
Be Human With Us: Being human isn’t about checking every box on a list. It’s about the experiences we have, people we meet, and the perspectives we share. So, if you have the skills but are hesitant to apply because of your background, apply anyway. We need amazing people like you to help us challenge the conventional and think differently about the problems that we’re solving. We’re in this together. Come be human, with us.
Use of AI Technology: We use technology, including automated and AI-assisted tools, to support certain aspects of our recruitment process. These tools are designed to improve efficiency and enhance the candidate experience. AI tools are not used to make hiring decisions; all hiring decisions are made by our hiring teams.

At ServiceTitan, we celebrate individuality and uniqueness. We believe that the convergence of fresh perspectives and experiences from all walks of life is what makes our product and culture so great. We do not discriminate against employees based on race, color, religion, sex, national origin, gender identity or expression, age, disability, sexual orientation, or any other characteristic protected by applicable laws.