Build and evolve the data platform foundations that make it easy and safe for others to ship dbt work (project structure, environments, permissions, patterns, documentation)
Establish and maintain standards and guardrails for dbt development (testing strategy, source freshness, documentation, code review practices)
Improve the developer experience for data workflows, including CI/CD, automated checks, and repeatable deployment patterns
Partner with Analytics and Finance to define and deliver trusted metrics and dashboards in Looker
Build and maintain analytics-ready datasets that support self-serve reporting and experimentation
Develop and optimize BigQuery data models for analytics and product use cases
Implement ELT best practices in dbt, including testing, documentation, and versioning
Design, build, and maintain scalable data pipelines using Python and dbt
Ensure data quality, reliability, and observability for critical datasets and reporting
Optimize performance and cost across BigQuery and data pipelines
Integrate data workflows with backend services and APIs
Participate in infrastructure decisions related to data ingestion and platform evolution
Requirements
5+ years of experience as a Software Engineer with deep data platform experience
Strong programming skills in Python
Strong SQL skills and experience with analytical data modeling
Hands-on experience with BigQuery (or a similar cloud data warehouse, with willingness to ramp)
Production experience with dbt (dbt Labs), including hands-on model development
Experience building or improving the infrastructure around data workflows (reliability, observability, CI/CD, permissions, environments, deployment patterns)
Strong software engineering fundamentals (testing, version control, code reviews)
Experience with GCP services such as Cloud Storage, Cloud Functions, Cloud Run, Pub/Sub, and Cloud Composer