Amplify is a pioneer in K–12 education, leading in next-generation curriculum and assessment. The Staff Analytics Engineer will work with data scientists and engineers to build analytical models and data solutions, helping school administrators and principals understand educational data effectively.
Responsibilities:
- Respecting privacy and ensuring security while offering valuable insights
- Making judicious choices in tech stack, database design, masking policies, and encryption
- Building analytical models to fuel the reporting we offer to administrators
- Architecting data warehouse schemas and SQL transforms with just the right CTEs, window functions, and pivots
- Creating data solutions using tools like Snowflake, Airflow, dbt, SQL, Python, and Cube.dev
- Immersing yourself in agile rituals and leveraging our infrastructure
- Leading collaboration, pull request reviews, and CI/CD processes, and mentoring on a cross-functional team
- Participating in cross-team share-outs, brownbags, and workshop series
- Becoming an expert in the data models and standards within Amplify to deliver quality and consistent solutions
- Building well-tested and documented ELT data pipelines for full and incremental dbt models that feed Cube Semantic Layer models
- Engineering novel datasets that capture a student's progress and performance through an adaptive learning experience, allowing flexible comparison across students and deep analysis of individual students
- Crafting slowly changing dimension models that account for the nuances of K–12 education, such as school-year transitions and students moving between schools or classes
- Improving our pipeline deployments and tests
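To give a flavor of the SQL transforms described above, here is a minimal sketch of a CTE plus window functions that compare each student's assessment score to a class-level baseline. The table and column names (`assessment_results`, `student_id`, `class_id`, `score`) are invented for illustration, not Amplify's actual schema.

```python
# Sketch of a CTE + window-function transform, run against an in-memory
# SQLite database. All table/column names here are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE assessment_results (
    student_id TEXT, class_id TEXT, score REAL
);
INSERT INTO assessment_results VALUES
    ('s1', 'c1', 80.0), ('s2', 'c1', 90.0), ('s3', 'c1', 70.0);
""")

rows = conn.execute("""
WITH scored AS (  -- CTE isolating the raw scores
    SELECT student_id, class_id, score FROM assessment_results
)
SELECT
    student_id,
    score,
    AVG(score) OVER (PARTITION BY class_id)                      AS class_avg,
    RANK()     OVER (PARTITION BY class_id ORDER BY score DESC)  AS class_rank
FROM scored
ORDER BY class_rank
""").fetchall()

for student_id, score, class_avg, class_rank in rows:
    print(student_id, score, class_avg, class_rank)
# → s2 90.0 80.0 1
#   s1 80.0 80.0 2
#   s3 70.0 80.0 3
```

The same pattern (partitioning by class, school, or grade band) supports the "flexible comparison across students" goal while keeping individual rows available for deep analysis of a single student.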
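The slowly changing dimension work mentioned above can be sketched as a Type 2 dimension: when a student changes schools (for example at a school-year boundary), the current row is closed out and a new current row is opened, preserving history. The `dim_student_school` table and its columns are hypothetical names for this sketch.

```python
# Minimal Type 2 slowly changing dimension sketch: close the open row,
# then insert a new current row. Table/column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE dim_student_school (
    student_id TEXT, school_id TEXT,
    valid_from TEXT, valid_to TEXT, is_current INTEGER
)
""")

def record_enrollment(conn, student_id, school_id, as_of):
    # Close the currently open row for this student, if any.
    conn.execute("""
        UPDATE dim_student_school
        SET valid_to = ?, is_current = 0
        WHERE student_id = ? AND is_current = 1
    """, (as_of, student_id))
    # Open a new current row, valid until further notice.
    conn.execute("""
        INSERT INTO dim_student_school VALUES (?, ?, ?, '9999-12-31', 1)
    """, (student_id, school_id, as_of))

record_enrollment(conn, "s1", "schoolA", "2023-09-01")
record_enrollment(conn, "s1", "schoolB", "2024-09-01")  # school-year change

history = conn.execute("""
    SELECT school_id, valid_from, valid_to, is_current
    FROM dim_student_school WHERE student_id = 's1'
    ORDER BY valid_from
""").fetchall()
for row in history:
    print(row)
# → ('schoolA', '2023-09-01', '2024-09-01', 0)
#   ('schoolB', '2024-09-01', '9999-12-31', 1)
```

Because prior rows are closed rather than overwritten, fact tables can join to the dimension row that was current at the time of each event, which is what makes year-over-year and mid-year-transfer analysis possible.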
Requirements:
- BS in Computer Science, Data Science, or equivalent experience
- 8+ years of professional software development or data engineering experience
- 5+ years of experience and demonstrated expertise in computer, data, and analytics engineering
- Expertise in SQL and its use in code-based ETL frameworks, preferably dbt, focusing on reuse and efficiency
- Expertise in ETL/ELT pipelines, analytical data modeling, aggregations, and metrics
- Expertise in dbt and Git, preferably with automation skills
- Expertise in analytical modeling architectures, including the Kimball design
- Strong communication skills in writing and conversation, including writing engineering training documentation
- Fluency in a development language such as Python
- Familiarity with metadata management tools such as Atlan
- 3+ years of experience building dashboards, reports, and models in business intelligence tools such as Tableau or Looker
- Expertise with the tools we use every day:
  - Storage: Snowflake, AWS storage services (S3, RDS, Glacier, DynamoDB), and Postgres
  - ETL/ELT: Airflow, dbt, Matillion, Fivetran
  - BI: Cube.dev, Looker, Tableau
- Experience with tools we don't use, but should
- Proven passion and talent for teaching fellow engineers and non-engineers
- Proven passion for building and learning: open source contributions, pet projects, self-education, Stack Overflow
- Proven technical leadership in project delivery
- Experience in education or ed-tech