Design, implement, and maintain scalable data engineering solutions following an Agile methodology.
Develop and manage reliable data integrations with external vendors and organizations (including API integrations).
Collaborate closely with Data Analysts, Data Scientists, DBAs, and cross-functional teams to understand requirements and deliver high-quality data solutions.
Take ownership of assigned components and deliverables within larger technical projects.
Contribute to improving the quality, security, efficiency, and scalability of data pipelines and infrastructure.
Requirements
Bachelor of Science degree in Computer Science or equivalent.
3–5 years of experience building and maintaining ETL/ELT pipelines.
2+ years of Python development experience, including building and maintaining production-grade data services (e.g., FastAPI or similar frameworks).
Strong SQL skills, including performance tuning and query optimization.
Hands-on experience with dbt, including developing models and managing incremental tables.
Practical experience with Snowflake, including data modeling and query optimization.
Experience with AWS data services such as S3, Redshift, Lambda, or similar tools for building and operating data workflows.
Tech Stack
Amazon Redshift
AWS
ETL
Python
SQL
Benefits
Medical, Dental, Vision - multiple packages available based on your individual needs
Life/AD&D Insurance - basic coverage 100% company-paid, with additional supplemental coverage available