Instacart is transforming the grocery industry by providing essential services to customers and flexible earning opportunities for personal shoppers. The company is seeking an Analytics Engineer to build and maintain the data models and pipelines that support marketing efficiency and ROI measurement, collaborating closely with cross-functional teams to enable self-serve analytics and drive decision-making.
Responsibilities:
- Design, build, and maintain robust, production-grade data models (e.g., in dbt) that power incentives, promotions, and lifecycle analytics, including standardized fact/dimension tables and a consistent metrics layer
- Partner with Data Engineering to model source data from multiple systems (e.g., marketing platforms, event streams, transactional data) and implement efficient, auditable ELT patterns in a modern cloud warehouse
- Define and operationalize KPIs and metrics for marketing efficiency and ROI; enable self-serve analytics in BI tools by implementing clean, documented semantic models and LookML (or equivalent)
- Set and enforce data quality standards with automated testing, lineage, documentation, and monitoring to ensure stakeholders can trust dashboards and analyses used to manage millions in annual spend
- Collaborate with Product, Marketing, and Engineering to scope requirements, prioritize a roadmap, and deliver high-impact datasets for experimentation, attribution, cohorting, and lifecycle performance reporting
- Continuously improve performance, reliability, and cost efficiency of pipelines and queries; drive best practices in version control, code review, and CI/CD for analytics engineering
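The responsibilities above center on a standardized metrics layer with consistent KPI definitions. As a rough illustration of that idea — the column names and formulas below are hypothetical, not Instacart's actual models — marketing-efficiency metrics like ROI and CAC reduce to a few shared definitions applied to campaign-level fact rows:

```python
# Illustrative sketch of metrics-layer-style definitions: standardized
# marketing-efficiency formulas applied to campaign-level fact rows.
# Field names and sample values are made up for illustration.

from dataclasses import dataclass

@dataclass
class CampaignFact:
    campaign_id: str
    spend: float               # total incentive/promo spend
    attributed_revenue: float  # revenue attributed to the campaign
    new_customers: int         # first-time customers acquired

def roi(fact: CampaignFact) -> float:
    """Return on investment: net attributed revenue per dollar of spend."""
    return (fact.attributed_revenue - fact.spend) / fact.spend

def cac(fact: CampaignFact) -> float:
    """Customer acquisition cost: spend per new customer acquired."""
    return fact.spend / fact.new_customers if fact.new_customers else float("inf")

facts = [
    CampaignFact("promo_q1", spend=10_000.0, attributed_revenue=35_000.0, new_customers=500),
    CampaignFact("lifecycle_q1", spend=4_000.0, attributed_revenue=6_000.0, new_customers=80),
]

for f in facts:
    print(f"{f.campaign_id}: ROI={roi(f):.2f}, CAC=${cac(f):.2f}")
# promo_q1: ROI=2.50, CAC=$20.00
# lifecycle_q1: ROI=0.50, CAC=$50.00
```

Centralizing such definitions in one place (e.g., a dbt metrics layer) is what keeps every dashboard and analysis reporting the same number for the same KPI.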
Requirements:
- 4+ years of experience in analytics engineering, data engineering, or BI development building production data models in a modern cloud data stack
- Advanced SQL proficiency (e.g., complex joins, window functions, query optimization) with a track record of performance tuning in Snowflake, BigQuery, or Redshift
- 2+ years implementing and maintaining dbt projects (models, tests, macros, documentation) in production with Git-based workflows
- Hands-on experience orchestrating ELT/ETL pipelines with Airflow, Dagster, or similar, including scheduling, dependency management, and alerting
- Experience building semantic layers and BI models (e.g., Looker/LookML, the dbt Semantic Layer, or equivalent) to enable reliable self-serve analytics
- Demonstrated use of automated data quality testing and data observability (e.g., dbt tests, Great Expectations, or similar) and ownership of documentation and lineage
- Bachelor's degree in Computer Science, Engineering, Mathematics, Statistics, or a related field, or equivalent practical experience
- Proven success partnering cross-functionally with Product, Marketing, and Engineering to translate ambiguous requirements into scalable datasets and clear deliverables
- Experience supporting growth or marketing teams, including incentives, promotions, lifecycle/CRM, attribution, or incrementality measurement
- Proficiency in Python for data transformation, orchestration tasks, or analytics utilities within the ELT workflow
- Experience with experimentation data (e.g., assignment, guardrails, lift) and building datasets to support A/B tests and causal inference workflows
- Familiarity with data governance and cataloging (e.g., DataHub, Amundsen) and warehouse cost/performance optimization best practices
- Background in consumer technology, marketplaces, or e-commerce operating at scale with complex event and transactional data
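Several requirements reference automated data quality testing (dbt tests, Great Expectations, or similar). As a rough sketch of the concept — not Instacart's tooling, and with made-up table and column names — the core checks (not_null, unique, accepted_values) reduce to simple assertions over a table's rows:

```python
# Plain-Python sketch of dbt-style data quality checks (not_null, unique,
# accepted_values). Table and column names here are hypothetical.

from collections import Counter

def check_not_null(rows, column):
    """Return indices of rows with a NULL (None) in the given column."""
    return [i for i, r in enumerate(rows) if r.get(column) is None]

def check_unique(rows, column):
    """Return values that appear more than once in the given column."""
    counts = Counter(r.get(column) for r in rows)
    return [v for v, n in counts.items() if n > 1]

def check_accepted_values(rows, column, accepted):
    """Return values that fall outside the accepted set."""
    return sorted({r.get(column) for r in rows} - set(accepted))

orders = [
    {"order_id": 1, "status": "delivered"},
    {"order_id": 2, "status": "canceled"},
    {"order_id": 2, "status": "mystery"},  # duplicate id + unexpected status
]

print(check_not_null(orders, "order_id"))                  # []
print(check_unique(orders, "order_id"))                    # [2]
print(check_accepted_values(orders, "status",
                            {"delivered", "canceled"}))    # ['mystery']
```

In practice these checks would be declared in a dbt schema file and run in CI, with failures blocking deployment — which is what makes the "stakeholders can trust the dashboards" responsibility enforceable rather than aspirational.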