Lead the design, implementation, and expansion of the company's data platform, including ingestion pipelines, data warehouse/lakehouse architecture, transformation layers, and data serving infrastructure.
Define and champion engineering standards, best practices, and architectural patterns for data infrastructure across the organization.
Collaborate cross-functionally to understand data needs and translate them into scalable, reliable platform capabilities.
Drive technical decisions on tooling, frameworks, and vendor selection for the data stack (e.g., orchestration, storage, transformation, observability).
Mentor and level up engineers on the team, providing technical guidance and code review with a focus on quality and long-term maintainability.
Proactively identify scalability, reliability, and performance bottlenecks and lead efforts to address them.
Partner with stakeholders across engineering, data science, and business teams to ensure the platform supports current and future analytical and operational needs.
Contribute to organizational planning and hiring, helping grow the data engineering function.
Requirements
Extensive experience in data engineering, platform engineering, or a related discipline, with a track record of building and scaling data infrastructure across multiple products or business domains.
Demonstrated experience designing and implementing end-to-end data platforms, including batch and/or streaming pipelines, data warehouse or lakehouse architectures, and transformation frameworks.
Deep expertise in at least one major cloud data ecosystem (AWS, GCP, or Azure) and familiarity with modern data stack tooling (e.g., dbt, Spark, Airflow/Dagster, Snowflake/BigQuery/Databricks, Kafka).
Strong software engineering fundamentals with proficiency in Python and/or Scala/Java for data systems.
Ability to operate at both the strategic architectural level and the hands-on implementation level.
Strong communication skills with the ability to influence technical direction and align cross-functional stakeholders.
Nice to Have
Experience building data platforms in a greenfield or high-growth environment.
Familiarity with data mesh, data fabric, or federated data architecture patterns.
Experience with data governance, data cataloging, and metadata management frameworks.
Prior work in climate tech, sustainability, or a mission-driven organization.
Tech Stack
Airflow
AWS
Azure
BigQuery
Google Cloud Platform
Java
Kafka
Python
Scala
Spark
Benefits
Comprehensive nationwide medical, dental, and vision coverage.
Time off as needed: flexible vacation policy and ten company-wide holidays, plus an annual winter break between Christmas and New Year's.
16 weeks of fully paid parental and family leave with no tenure requirement.
Remote-friendly work culture with annual company-wide retreats.
Reimbursement for your home-office setup, plus a monthly work-from-home stipend.