Build and deploy the data pipelines that power Company's machine learning platform.
Develop warehouse architectures that integrate data from diverse sources.
Design analytics solutions for both product development and internal collaborators.
Requirements
A minimum of 1 year of experience in data engineering, analytics engineering, or software engineering, with an emphasis on large-scale data management and processing systems.
Proficiency in Python and advanced SQL.
Advanced experience with ETL pipelines, data modeling, and managing data warehouses (Snowflake, BigQuery, Redshift, etc.).
Experience with SQL RDBMSs such as MySQL, Postgres, Oracle, and SQL Server, as well as NoSQL systems such as MongoDB, is required.
Experience with tools including Meltano, dbt, Airflow, and Superset is required.
Experience with Docker, Git, and AWS (EC2, ECS, S3, Step Functions, and Athena).
Knowledge of analytics and business intelligence is highly preferred.
Experience working with common SaaS APIs (such as CRMs) is preferred.