Kit is an email-first operating system that helps creators grow and monetize their audience. The company is seeking a Lead Analytics Engineer to build a robust data foundation that empowers teams to make informed decisions without relying on the Data team for routine data interpretation.
Responsibilities:
- Complete onboarding in Notion and meet your teammates through Get-To-Know-You calls
- Get oriented in our core tools: dbt, GitHub, Redshift, Omni, Slack, and Linear
- Start reading existing documentation on our Reporting Hub architecture and canonical metric definitions
- Audit active Reporting Hub models across Finance, Marketing, Sales, Product Strategy, and Creator Lifecycle
- Map the current churn logic implementation and identify inconsistencies in canonical metric definitions
- Assess areas of Redshift performance strain and flag fragile model dependencies
- Publish a written architectural assessment memo with your initial findings and recommended priorities
- Serve as the data team’s voice in cross-functional discussions about metric definitions, attribution logic, and analytical rigor across Finance, Product, and Marketing
- Propose and begin executing a 6–12 month modernization roadmap for our modeling layer
- Refactor the highest-risk models across all business verticals
- Clarify attribution model contracts and improve cross-functional documentation standards
- Measurably reduce warehouse inefficiencies through distribution and sort key optimization rather than infrastructure scaling
- Improve Segment event modeling structure and align event design with reporting needs in partnership with Product and Engineering
- Drive company-wide adoption of canonical metrics by working directly with functional leads to replace ad hoc definitions with documented, auditable standards
- Complete the Reporting Hub foundation across business verticals with canonical metric consistency enforced
- Enable true stakeholder self-serve, so teams can explore data without relying on the Data team for routine questions
- Reduce model rework by resolving upstream ambiguity with shared design patterns and documentation
- Raise the overall modeling sophistication of the Data team through mentorship, standards, and shared tooling
- Equip leadership to make resource allocation and growth decisions using a shared metric layer they trust without qualification
- Help Finance close faster by making revenue and subscription models reliable and consistent
- Enable Product and the broader company to make decisions with cohort and performance data they no longer need to manually verify, freeing the Data team to ship faster with less rework
Requirements:
- Deep SQL mastery. You write queries that are readable, performant, and designed for maintainability. You understand relational semantics deeply, including join selection, cardinality management, and predicate placement
- Redshift expertise. You know how sort keys, distribution styles, concurrency, VACUUM, and ANALYZE behavior work, and how to design around their constraints
- dbt fluency. You build modular, well-documented, refactorable dbt code. You have opinions about macro design, test strategy, and what 'good' looks like in a dbt project at scale
- Business modeling strength. You can translate revenue recognition logic, attribution frameworks, funnel stages, and lifecycle metrics into reliable, auditable models that stakeholders trust
- Written communication. You write clear architectural memos, modeling documentation, and decision rationale. You communicate trade-offs, not just conclusions
- Analytical judgment. You have experience navigating contested metric definitions, disputed dashboard results, and attribution debates while maintaining analytical rigor
- Stakeholder translation. You can explain complex metrics and modeling decisions to different stakeholders and communicate in terms they trust
- 7–10+ years working in analytics engineering, data engineering, or a closely related role, with at least a few years operating at a senior or staff level
- Designed or significantly refactored a warehouse modeling layer in a production environment
- Built or stewarded canonical metric definitions in a multi-stakeholder environment
- Worked in a SaaS company where revenue recognition, churn logic, or subscription modeling was a core part of the work
- Experience with product analytics modeling, Segment event architecture, identity resolution, or cohort/retention modeling is a strong plus
- Collaborated with infrastructure engineers to address performance challenges at the warehouse level
- As a lead-level IC, you see your role as amplifying the effectiveness of the entire data team and driving company-wide impact. You lead by example
- You think in systems. When you look at a modeling problem, you see the upstream causes and downstream consequences, not just the immediate fix
- You're patient with complexity. You don't oversimplify hard problems, and you don't ship a definition you can't defend
- You document as you go, not as an afterthought. You believe that work that's not written down didn't really happen
- You push back when necessary. You're comfortable disagreeing with stakeholders, explaining your reasoning, and holding the line on rigor when speed would create long-term problems
- You use AI tools actively. You're comfortable using AI to accelerate drafting, exploration, and deployment, while applying your own judgment to the outputs
- You operate with extreme autonomy. At a staff level, you don't wait for a detailed brief. You identify what needs to be done, communicate your plan, and move