Collaborate with team members to define requirements and translate them into scalable data models and pipelines.
Develop and maintain ELT pipelines, ensuring data reliability and scalability for business reporting and analytics use cases.
Work with raw data and transform it into structured, usable formats for analytics and reporting.
Build and optimize SQL-based data models using dbt and other transformation tools.
Identify and implement improvements in data delivery, processing performance, and system efficiency.
Contribute to the team’s technical vision and bring innovative solutions to enhance data systems.
Partner with data engineering teams to drive standardization and consistency of product data modeling, schemas, and insights reporting across the entire company.
Requirements
Background in a quantitative field, such as analytics, engineering, applied mathematics, or statistics.
A BS/BA degree or equivalent experience is preferred.
Data engineering or analytics engineering experience building, maintaining, and working with data pipelines and ETL processes in big data environments.
Extensive experience with SQL, ideally in the context of data modeling and analysis.
Hands-on production experience with dbt, Airflow, and Snowflake.
Experience with cloud columnar databases (e.g., Google BigQuery, Amazon Redshift, Snowflake) and SQL query authoring, plus working familiarity with a variety of databases.
Proven experience in performance testing, capacity planning, and cost optimization for large-scale, complex data pipelines and systems.
Experience with Tableau, Looker, or similar data visualization tools.
Excellent communication and collaboration skills.
Boundless intellectual curiosity: you are self-motivated and unafraid to ask questions in order to learn and grow.
Experience at fast-paced, at-scale B2B tech companies is a plus.