Synodus is seeking a Data Engineer to design, develop, and maintain data pipelines and data warehousing solutions on Azure. The role involves optimizing ETL processes, integrating data from various sources, and collaborating with cross-functional teams to ensure data quality and security.
Responsibilities:
- Design, develop, and maintain data pipelines using Azure Data Factory
- Build and optimize data models and data warehousing solutions on Azure SQL
- Develop and maintain ETL/ELT processes to ensure efficient data flow
- Integrate data from various sources via APIs
- Work with Azure Data Lake Storage Gen2 for large-scale data storage and processing
- Develop dashboards and reports using Power BI
- Write and optimize complex SQL queries and Stored Procedures
- Automate workflows and integrations using Azure Logic Apps
- Implement data models and analytics solutions using Azure Analysis Services
- Manage source code and version control using Azure Repos
- Set up and maintain CI/CD pipelines using Azure Pipelines and Azure Releases
- Ensure data quality, integrity, and security across all data systems
- Collaborate with cross-functional teams including Business Analysts, Data Analysts, and Developers
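The responsibilities above revolve around ETL/ELT flows: extract data from a source, apply quality checks and transformations, and load the result into a warehouse table. A minimal sketch of that pattern in plain Python follows; the payload, field names, and in-memory "warehouse" are invented for illustration, and a real pipeline would replace each stage with Azure Data Factory activities or API/database clients.

```python
# Minimal ETL sketch: extract raw records, transform them, and load
# them into an in-memory "warehouse" list. All names here are
# illustrative stand-ins for real source systems and target tables.

def extract(api_response):
    """Extract: pull the record list out of a raw API payload."""
    return api_response.get("records", [])

def transform(records):
    """Transform: coerce types and drop incomplete rows."""
    cleaned = []
    for row in records:
        if row.get("id") is None or row.get("amount") is None:
            continue  # basic data-quality check: skip incomplete rows
        cleaned.append({"id": int(row["id"]), "amount": float(row["amount"])})
    return cleaned

def load(rows, warehouse):
    """Load: append cleaned rows to the target table; report row count."""
    warehouse.extend(rows)
    return len(rows)

# Simulated API payload standing in for a real source system
payload = {"records": [{"id": "1", "amount": "9.50"},
                       {"id": "2", "amount": None},   # dropped by transform
                       {"id": "3", "amount": "4.25"}]}
warehouse = []
loaded = load(transform(extract(payload)), warehouse)
print(loaded)        # 2 rows survive the quality check
print(warehouse[0])  # {'id': 1, 'amount': 9.5}
```

The same three-stage shape scales from this toy example to orchestrated pipelines, where each stage becomes an independently monitored and retryable step.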
Requirements:
- Strong experience in Python development
- Hands-on experience with Azure Data Factory, Azure SQL, and Azure Data Lake Storage Gen2
- Strong expertise in SQL and Stored Procedures
- Experience with API Integration
- Proficiency in Power BI for data visualization and reporting
- Experience with Azure Logic Apps and Azure Analysis Services
- Familiarity with Azure DevOps tools: Azure Repos, Azure Pipelines, Azure Releases
- Solid understanding of data warehousing and ETL/ELT concepts
- Strong problem-solving and analytical skills
- Ability to communicate effectively in English (B2 level or above)
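As an illustration of the SQL expertise listed above, here is a self-contained sketch using Python's standard-library sqlite3 as a stand-in for Azure SQL. The table, columns, and data are invented for the demo, and SQLite lacks stored procedures, so only the query side (a CTE plus a window function computing a running total per region) is shown.

```python
import sqlite3

# In-memory stand-in for an Azure SQL table; schema is invented for the demo.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, month TEXT, revenue REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("north", "2024-01", 100.0), ("north", "2024-02", 150.0),
     ("south", "2024-01", 80.0),  ("south", "2024-02", 60.0)],
)

# A "complex" query in miniature: a CTE plus a window function that
# computes each region's running revenue total by month.
query = """
WITH ordered AS (
    SELECT region, month, revenue,
           SUM(revenue) OVER (
               PARTITION BY region ORDER BY month
           ) AS running_total
    FROM sales
)
SELECT region, month, running_total FROM ordered ORDER BY region, month
"""
rows = conn.execute(query).fetchall()
for row in rows:
    print(row)  # e.g. ('north', '2024-02', 250.0)
```

Window functions require SQLite 3.25 or newer; on Azure SQL the same query shape works unchanged, typically wrapped in a stored procedure for reuse.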