Assist in the design, development, and deployment of cloud data platforms, including data lakehouses and warehouses
Support the implementation of ETL/ELT data pipelines and scheduled jobs using tools like Spark, Databricks notebooks, and managed services such as Fivetran
Leverage managed and serverless cloud offerings to build application solutions and data pipelines
Help ensure adherence to data security and governance policies
Assist in generating insights and reports using BI tools such as Tableau and Power BI
Requirements
2–4 years of experience in data engineering or related roles/internships
Proficiency in Python and SQL, with familiarity with Bash scripting and tools such as dbt
Familiarity with ETL/ELT concepts and best practices
Familiarity with the Medallion Architecture (Bronze, Silver, and Gold layers)
Hands‑on experience with 2–3 major cloud and data platforms (e.g., Azure, AWS, GCP, Snowflake, Databricks, Microsoft Fabric)
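For candidates unfamiliar with the term, the Medallion Architecture mentioned above organizes data into progressively refined layers: Bronze (raw, as ingested), Silver (cleaned and validated), and Gold (business-level aggregates). A minimal pure-Python sketch of the idea follows; in practice these layers are typically Delta tables processed with Spark on a platform like Databricks, and the record fields here are hypothetical.

```python
from collections import defaultdict

# Bronze layer: raw records as ingested, including duplicates and bad rows.
bronze = [
    {"order_id": "1", "amount": "10.50", "region": "EU"},
    {"order_id": "1", "amount": "10.50", "region": "EU"},  # duplicate
    {"order_id": "2", "amount": "oops",  "region": "US"},  # unparseable amount
    {"order_id": "3", "amount": "7.25",  "region": "US"},
]

def to_silver(rows):
    """Silver layer: deduplicate on order_id and cast/validate fields."""
    seen, out = set(), []
    for r in rows:
        if r["order_id"] in seen:
            continue  # drop duplicates
        try:
            amount = float(r["amount"])
        except ValueError:
            continue  # drop rows that fail validation
        seen.add(r["order_id"])
        out.append({"order_id": r["order_id"], "amount": amount,
                    "region": r["region"]})
    return out

def to_gold(rows):
    """Gold layer: business-level aggregate, e.g. revenue per region."""
    totals = defaultdict(float)
    for r in rows:
        totals[r["region"]] += r["amount"]
    return dict(totals)

silver = to_silver(bronze)
gold = to_gold(silver)
```

The same flow maps directly onto Spark: each function becomes a DataFrame transformation writing to the next layer's table.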
Tech Stack
AWS
Azure
Cloud
ETL
Google Cloud Platform
Python
Spark
SQL
Tableau
Benefits
Fully remote
Flexible schedule
Paid parental and bereavement leave
Globally recognized clients, helping you build skills for an excellent resume