BravoTech, a leader in IT staffing and staff augmentation services, is seeking an Azure Data Engineer II for a preferred client. The role involves designing and developing scalable data pipelines, collaborating with cross-functional teams to deliver reliable data solutions, and optimizing SQL queries for performance and efficiency.
Responsibilities:
- Build data pipelines using Azure Data Factory
- Collaborate with cross-functional teams to deliver reliable, high-quality data solutions that directly support patients, providers, and the business
- Design, build, and maintain scalable data pipelines (ETL/ELT) to support analytics, reporting, and operational needs
- Collaborate with senior Data Engineers, BI developers, and analysts to develop robust, efficient, and well-documented data solutions
- Contribute to architecture discussions and data flow design, ensuring best practices for performance, quality, and scalability
- Optimize SQL queries, data models, and ETL processes for performance and efficiency
- Implement testing, monitoring, and error handling for production-grade pipelines
- Ensure data quality, lineage, and documentation across subject areas
- Partner with business stakeholders to translate requirements into technical solutions
- Work with cloud-based data sources, including structured, semi-structured, and unstructured data
Requirements:
- Experience writing SQL code from scratch
- Minimum 2 years of Python scripting experience
- Experience building data pipelines using Azure Data Factory
- Strong SQL skills, including query optimization and data modeling (relational and/or dimensional)
- Hands-on experience with ETL/ELT design and development
- Proficiency in at least one programming language (Python, C#, Java, or Spark)
- Familiarity with modern data warehouse technologies (Snowflake, Azure Synapse, AWS Redshift, etc.)
- Demonstrated ability to troubleshoot SQL Server, Windows Server, and related environments
- Familiarity with CI/CD practices for data pipelines (e.g., GitHub Actions, Azure DevOps)
- Must have a degree
- 6+ years of experience as an Azure Data Engineer
- Open to US citizens and green card holders (no sponsorship available)
- Clean criminal background and ability to pass a drug screen
- Microsoft Fabric experience is a huge plus
- Healthcare industry experience is a huge plus