Arkhya Tech. Inc. is seeking a Data Engineer with experience in Microsoft Fabric. The role involves designing and maintaining data pipelines, integrating diverse data sources, and ensuring data quality while collaborating with various teams to deliver analytical solutions.
Responsibilities:
- Design, develop, and maintain data pipelines using Microsoft Fabric tools (Dataflows Gen2, Pipelines, Lakehouses, Warehouses)
- Implement parameterized, metadata-driven data flows to support dynamic and reusable solutions across multiple Fabric workspaces
- Integrate data from diverse sources (structured and unstructured) into OneLake and Fabric Lakehouses, ensuring data quality and consistency
- Optimize data ingestion, transformation, and serving for performance, scalability, and cost efficiency
- Secure data by applying zero-trust and least-privilege principles across Azure and Fabric platforms
- Develop and maintain semantic models for Power BI and Fabric, enabling self-service analytics and reporting
- Collaborate with analytics and reporting teams to deliver dashboards, reports, and insights that meet business requirements
- Support entity- and partner-level data isolation, including row-level security and data partitioning strategies
- Work closely with business stakeholders, data operations, and analytics teams to understand requirements and translate them into technical solutions
- Participate in Agile/Scrum ceremonies, contributing to sprint planning, refinement, and delivery
- Support ongoing projects such as entity data integration, universal enrollment, and dashboard development for programs
- Develop custom scripts and solutions using Python, Spark SQL, or Databricks notebooks to extend Fabric capabilities and automate data processes
- Implement CI/CD practices for data pipelines and artifacts, collaborating with DevOps and platform teams as needed
- Ensure data lineage, governance, and lifecycle management across Fabric and OneLake
- Apply best practices for data quality, validation, and auditing
- Support regulatory and compliance requirements (RBAC, Purview, data classification)
Requirements:
- 3+ years of experience in data engineering, preferably with Microsoft Fabric, Azure Data Factory, Synapse, or similar platforms
- Hands-on expertise with Microsoft Fabric components: Dataflows Gen2, Pipelines, Lakehouses, Warehouses, OneLake, Power BI
- Strong proficiency in SQL, Python, and/or Spark for data processing and transformation
- Experience with metadata-driven pipeline design and parameterization
- Familiarity with CI/CD for data engineering (Azure DevOps, GitHub Actions)
- Solid understanding of data modeling, ETL/ELT processes, and data warehousing concepts
- Experience working in Agile/Scrum environments
- Experience with Databricks, Spark SQL, or other big data platforms
- Knowledge of Azure Data Factory and Synapse Analytics, and their integration with Microsoft Fabric
- Familiarity with row-level security, data partitioning, and partner-level data isolation
- Exposure to data governance tooling and practices (Microsoft Purview, RBAC, auditing)
- Experience supporting reporting and analytics for programs