Develop and maintain ETL/ELT pipelines using Azure Data Factory and/or Fabric Data Factory to ingest data from databases, files, APIs, and SaaS applications
Build and optimize curated data layers in Microsoft Fabric (OneLake, Lakehouse, Warehouse) and Snowflake following modern patterns such as the medallion (Bronze/Silver/Gold) architecture; see the first sketch after this list
Write, tune, and troubleshoot advanced SQL for transformations, validations, and analytics workloads
Design and maintain dimensional data models (facts and dimensions) to support BI and self‑service analytics (Power BI or similar); see the star‑schema sketch after this list
Monitor and improve data quality, reliability, and performance, including incremental loads and cost optimization
Apply security and governance best practices (RBAC, controlled data access, basic lineage/metadata documentation)
Collaborate in an Agile environment, using DevOps, Git, and work‑tracking tools to deliver well‑documented, production‑ready solutions
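For context on the Bronze/Silver/Gold work above, here is a minimal sketch of an incremental Silver‑layer load. The table and column names (bronze.orders_raw, silver.orders, _loaded_at) are illustrative assumptions, and exact MERGE syntax varies slightly between Snowflake, Azure SQL, and Fabric:

    -- Minimal sketch, not a definitive pattern: upsert only the Bronze rows
    -- that arrived since the last Silver load. All names are hypothetical.
    MERGE INTO silver.orders AS tgt
    USING (
        SELECT order_id, customer_id, order_date, amount, _loaded_at
        FROM bronze.orders_raw
        WHERE _loaded_at > (SELECT COALESCE(MAX(_loaded_at), '1900-01-01')
                            FROM silver.orders)
    ) AS src
        ON tgt.order_id = src.order_id
    WHEN MATCHED THEN UPDATE SET
        customer_id = src.customer_id,
        order_date  = src.order_date,
        amount      = src.amount,
        _loaded_at  = src._loaded_at
    WHEN NOT MATCHED THEN INSERT
        (order_id, customer_id, order_date, amount, _loaded_at)
        VALUES (src.order_id, src.customer_id, src.order_date,
                src.amount, src._loaded_at);

Filtering the source on a load watermark keeps each run incremental, which is what holds compute cost down as the Bronze layer grows.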
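And a minimal star‑schema sketch for the dimensional‑modeling bullet: one dimension and one fact table joined on a surrogate key. All names are hypothetical, and IDENTITY/constraint support differs across Snowflake, Azure SQL, and Fabric Warehouse:

    -- Hypothetical star schema: a customer dimension and a sales fact.
    CREATE TABLE dim_customer (
        customer_key  INT IDENTITY(1,1) PRIMARY KEY,  -- surrogate key
        customer_id   VARCHAR(20) NOT NULL,           -- source business key
        customer_name VARCHAR(100),
        region        VARCHAR(50)
    );

    CREATE TABLE fact_sales (
        customer_key  INT NOT NULL REFERENCES dim_customer (customer_key),
        date_key      INT NOT NULL,        -- FK to a date dimension
        quantity      INT,
        net_amount    DECIMAL(18,2)
    );

Keeping facts narrow (keys and measures) and pushing descriptive attributes into dimensions is what makes the model friendly to Power BI and self‑service users.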
Requirements
Bachelor’s degree in Computer Engineering or Computer Science, or equivalent work experience
3–6 years of experience in data development or data engineering
Strong SQL skills, with experience creating and maintaining database objects (tables, views, stored procedures, indexes, and functions) and optimizing queries and data transformations; see the illustrative SQL after this list
Hands-on experience with Azure data services (ADF, Azure SQL, ADLS, Synapse, and/or Fabric)
Working experience with Microsoft Fabric (Lakehouse, Warehouse, OneLake, or notebooks)
Hands-on experience with Snowflake as a data warehouse
Solid understanding of data warehousing concepts, ETL/ELT, and dimensional modeling
Experience using Git and following basic CI/CD or structured deployment practices
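As an illustration of the SQL‑objects requirement above, this is the kind of index and view work the role involves (T‑SQL syntax; the fact_sales table and the YYYYMMDD integer date keys are assumptions carried over from the earlier sketch):

    -- Covering index so the view below can aggregate without scanning
    -- the whole fact table.
    CREATE INDEX ix_fact_sales_customer_date
        ON fact_sales (customer_key, date_key)
        INCLUDE (net_amount);

    -- Monthly rollup exposed for BI consumers.
    CREATE VIEW vw_customer_monthly_sales AS
    SELECT customer_key,
           date_key / 100 AS year_month,   -- YYYYMMDD -> YYYYMM
           SUM(net_amount) AS total_sales
    FROM fact_sales
    GROUP BY customer_key, date_key / 100;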