Kit is an email-first operating system for creators who mean business, helping them grow and monetize their audience. The Staff Analytics Engineer will build the data foundation that allows all teams at Kit to operate from a single source of truth, enabling confident data exploration across the company.
Responsibilities:
- Complete onboarding in Notion and meet your teammates through Get-To-Know-You calls
- Get oriented in our core tools: dbt, GitHub, Redshift, Omni, Slack, and Linear
- Start reading existing documentation on our Reporting Hub architecture and canonical metric definitions
- Audit active Reporting Hub models across Finance, Marketing, Sales, Product Strategy, and Creator Lifecycle
- Map the current churn logic implementation and identify inconsistencies in canonical metric definitions
- Assess areas of Redshift performance strain and flag fragile model dependencies
- Publish a written architectural assessment memo with your initial findings and recommended priorities
- Serve as the data team’s voice in cross-functional discussions about metric definitions, attribution logic, and analytical rigor across Finance, Product, and Marketing
- Propose and begin executing a 6–12 month modernization roadmap for our modeling layer
- Refactor the highest-risk models across all business verticals
- Clarify attribution model contracts and improve cross-functional documentation standards
- Measurably reduce warehouse inefficiencies through distribution and sort key optimization rather than infrastructure scaling
- Improve Segment event modeling structure and align event design with reporting needs in partnership with Product and Engineering
- Drive company-wide adoption of canonical metrics by working directly with functional leads to replace ad hoc definitions with documented, auditable standards
- Complete the Reporting Hub foundation across business verticals, with canonical metric consistency enforced
- Enable true stakeholder self-serve, so teams can explore data without relying on the Data team for routine questions
- Reduce model rework caused by upstream ambiguity through better upstream design patterns and documentation
- Raise the overall modeling sophistication of the Data team through mentorship, standards, and shared tooling
- Deliver a shared metric layer that leadership trusts without qualification when making resource allocation and growth decisions
- Help Finance close faster by making revenue and subscription models reliable and consistent
- Give Product and the broader company cohort and performance data they no longer need to manually verify, so the Data team can ship faster with less rework
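To give candidates a flavor of the canonical-metric work described above, here is a minimal sketch of what a single, documented churn definition might look like as a dbt model. All table and column names here are hypothetical and illustrative, not Kit's actual schema:

```sql
-- Hypothetical dbt model: one documented definition of monthly churn
-- that downstream dashboards reference instead of re-deriving it.
with churned as (
    select
        account_id,
        date_trunc('month', cancelled_at) as churn_month
    from {{ ref('stg_subscriptions') }}  -- staging model name is illustrative
    where cancelled_at is not null
),

active as (
    select
        date_trunc('month', activity_date) as activity_month,
        count(distinct account_id) as active_accounts
    from {{ ref('stg_account_activity') }}
    group by 1
)

select
    a.activity_month,
    a.active_accounts,
    count(distinct c.account_id) as churned_accounts,
    count(distinct c.account_id)::decimal / a.active_accounts as churn_rate
from active a
left join churned c
    on c.churn_month = a.activity_month
group by 1, 2
```

The point of a model like this is that "churn" is computed in exactly one place, tested and documented there, so Finance, Product, and Marketing all read the same number.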
Requirements:
- Deep SQL mastery. You write queries that are readable, performant, and designed for maintainability. You understand relational semantics deeply, including join selection, cardinality management, and predicate placement
- Redshift expertise. You know how sort keys, distribution styles, concurrency, and VACUUM/ANALYZE behavior work, and how to design around their constraints
- dbt fluency. You build modular, well-documented, refactorable dbt code. You have opinions about macro design, test strategy, and what 'good' looks like in a dbt project at scale
- Business modeling strength. You can translate revenue recognition logic, attribution frameworks, funnel stages, and lifecycle metrics into reliable, auditable models that stakeholders trust
- Written communication. You write clear architectural memos, modeling documentation, and decision rationale. You communicate trade-offs, not just conclusions
- Analytical judgment. You have experience navigating contested metric definitions, disputed dashboard results, and attribution debates while maintaining analytical rigor
- Stakeholder translation. You can explain complex metrics and modeling decisions to different stakeholders and communicate in terms they trust
- 7–10+ years working in analytics engineering, data engineering, or a closely related role, with at least a few years operating at a senior or staff level
- Designed or significantly refactored a warehouse modeling layer in a production environment
- Built or stewarded canonical metric definitions in a multi-stakeholder environment
- Worked in a SaaS company where revenue recognition, churn logic, or subscription modeling was a core part of the work
- Collaborated with infrastructure engineers to address performance challenges at the warehouse level
- Experience with product analytics modeling, Segment event architecture, identity resolution, or cohort/retention modeling is a strong plus
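On the Redshift side, the distribution and sort key tuning mentioned in the requirements usually comes down to DDL choices like the following. The table and key choices here are illustrative only:

```sql
-- Hypothetical fact table: co-locating rows on account_id (DISTKEY)
-- avoids cross-node shuffles in account-level joins, and sorting by
-- event time (SORTKEY) lets range-filtered queries scan fewer blocks.
-- Tuning of this kind, rather than adding nodes, is what the role calls for.
create table fact_subscription_events (
    event_id      bigint,
    account_id    bigint,
    event_type    varchar(32),
    occurred_at   timestamp
)
diststyle key
distkey (account_id)
sortkey (occurred_at);
```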