Intermediate Data Developer – Data Engineering
Plusgrade
Canada
Full Time
2 hours ago
No H1B
Key skills
Airflow
Amazon Redshift
AWS
Cloud
ETL
PySpark
Python
SQL
ELT
Data Engineering
Data Warehousing
Analytics
Snowflake
Lambda
S3
Glue
Git
Version Control
Performance Optimization
CI/CD
Collaboration
About this role
Role Overview
Design, develop, and maintain robust ETL/ELT pipelines to integrate data from multiple sources into a centralized cloud-based data platform
Build scalable data ingestion, transformation, and enrichment processes using Python, SQL, and PySpark (see the sketch after this list)
Optimize data workflows for performance, scalability, and cost efficiency in the cloud
Implement data quality and validation checks to ensure trust in reporting, analytics, and data-driven products
Collaborate with cross-functional teams to translate business requirements into technical data solutions
Support large-scale transformations using distributed processing frameworks
Troubleshoot and resolve issues in data pipelines, ensuring reliability and uptime
Participate in code reviews and contribute to engineering standards and best practices
Document data processes, pipelines, and schemas to improve transparency and reusability
Stay current with modern data engineering tools, practices, and cloud technologies, with a passion for continual learning and knowledge sharing
Build with stakeholders in mind, not just raw pipelines.
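A minimal PySpark sketch of the kind of ingest-validate-write step described above; the S3 bucket paths and the columns (event_id, event_ts, amount) are hypothetical placeholders, not details from this posting.

```python
# Illustrative PySpark job: ingest raw CSV events from S3, apply basic
# data quality checks, and write partitioned Parquet to a curated zone.
# Bucket names, paths, and columns are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("example-ingest-transform")
    .getOrCreate()
)

# Ingest: read raw CSV event data (schema inferred for brevity).
raw = (
    spark.read
    .option("header", True)
    .csv("s3://example-raw-bucket/events/2024/")
)

# Data quality: drop rows missing key fields, reject negative amounts.
clean = (
    raw.dropna(subset=["event_id", "event_ts"])
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount") >= 0)
)

# Enrich: derive a date column used for partitioning downstream.
enriched = clean.withColumn("event_date", F.to_date("event_ts"))

# Load: write partitioned Parquet to the curated zone.
(
    enriched.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-curated-bucket/events/")
)

spark.stop()
```

Partitioning by event date keeps downstream reads selective; a production pipeline would also declare explicit schemas and richer validation rules.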
Requirements
3+ years of experience in data engineering, data development, or data management
Strong hands-on experience with Snowflake and modern data warehouse concepts (data lakes, lakehouse, streaming)
Proficiency in Python and SQL for building and optimizing data pipelines
Hands-on experience with AWS services such as S3, Glue, Lambda, Redshift, and data platforms such as Snowflake
Experience with ETL/ELT, data modeling, and data warehousing concepts
Experience with orchestration tools such as Airflow or Dagster (see the DAG sketch after this list)
Hands-on experience with PySpark and distributed data processing frameworks (e.g., AWS EMR)
Knowledge of pipeline performance optimization and debugging
Strong problem-solving, analytical, and collaboration skills
Experience with version control (Git) and CI/CD workflows
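A minimal sketch of how such a pipeline might be orchestrated as a daily Airflow DAG, assuming Airflow 2.4+; the DAG id, schedule, and task bodies are placeholders rather than anything specified in this posting.

```python
# Illustrative Airflow DAG: a daily extract -> transform -> load chain.
# DAG id, schedule, and task logic are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull raw data from a source system.
    print("extracting raw data")


def transform(**context):
    # Placeholder: clean and model the extracted data.
    print("transforming data")


def load(**context):
    # Placeholder: load curated data into the warehouse.
    print("loading data into warehouse")


with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```

The chained >> operators define task ordering; in practice each task would trigger a PySpark job or warehouse load rather than a print statement.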
Tech Stack
Airflow
Amazon Redshift
AWS
Cloud
ETL
PySpark
Python
SQL
Benefits
RRSP matching
Comprehensive health plans
Flexible paid time off
Travel experience perk
Annual wellness perk
Commuter perk
Tenure-based Work From Anywhere program
Parental leave top-up
Adventure Pass
Learning allowance