Responsibilities
Design and implement scalable data architectures on Azure, ensuring data is integrated and made available efficiently to the company’s various departments.
Develop, manage, and optimize ETL pipelines, monitor databases, ensure information security and integrity, and automate processes to increase operational efficiency.
Work in partnership with data scientists and analysts to ensure data is clean, structured, and ready for analysis.
Implement data governance policies, ensure compliance with regulations, maintain systems in proper working order, and perform continuous improvements as needed.
Requirements
Strong knowledge of core Azure data services, including Azure SQL Database, Azure Data Factory, Azure Databricks, Azure Synapse Analytics, and Azure Data Lake Storage.
Proficiency in languages such as SQL, Python, and Scala, and hands-on experience with ETL tools, especially Azure Data Factory.
Expertise in data modeling, both relational and non-relational, as well as knowledge of Big Data technologies such as Hadoop, Spark, and Kafka.
Experience with data security practices, including encryption and access controls.
Familiarity with DevOps practices, including CI/CD pipeline automation with tools such as Azure DevOps.
Strong problem-solving skills, clear communication with both technical and non-technical audiences, collaborative teamwork, and the ability to manage multiple priorities simultaneously.