BBI is seeking a Data Engineer to develop, deploy, and maintain robust data pipelines and ETL processes. The role involves designing data integration workflows, monitoring pipeline performance, and ensuring data quality and compliance with organizational standards.
Responsibilities:
- Develop, deploy, and maintain robust data pipelines and ETL processes using Databricks and PySpark to ingest, process, and transform enterprise-grade, large-scale datasets
- Design and implement data integration workflows and orchestration using Azure Data Factory to automate data movement across diverse sources and destinations
- Monitor, troubleshoot, and enhance data pipeline performance, optimizing for cost as well as speed, while ensuring data quality, security, and compliance with organizational standards
- Integrate data solutions with the Client’s data quality (DQ) framework
- Develop and execute unit test cases, SIT cases, and test plans in Azure DevOps (ADO) or a comparable testing tool
- Support the resolution of UAT defects
- Preferred: Develop reports and dashboards using Power BI
Requirements:
- Proven experience developing, deploying, and maintaining robust data pipelines and ETL processes using Databricks and PySpark on enterprise-grade, large-scale datasets
- Hands-on experience designing and implementing data integration workflows and orchestration in Azure Data Factory, automating data movement across diverse sources and destinations
- Ability to monitor, troubleshoot, and tune data pipelines for both performance and cost, while maintaining data quality, security, and compliance with organizational standards
- Experience integrating data solutions with an enterprise data quality (DQ) framework
- Experience developing and executing unit test cases, SIT cases, and test plans in Azure DevOps (ADO) or a comparable testing tool
- Experience supporting the resolution of UAT defects
- Experience developing reports and dashboards using Power BI