Responsible for designing, implementing, and maintaining scalable and efficient data infrastructure and pipelines
Extracts data from complex and diverse data sources and builds and manages data repositories
Designs and develops data pipelines to ingest, transform, and load data from various sources into the data ecosystem
Collaborates with Analytics Engineers and other cross-functional teams to ensure the data infrastructure meets the organization’s needs and supports data-driven initiatives and enterprise decision-making
Writes complex SQL queries to move data from various source systems into the data environments
Creates reconciliation processes to ensure data is loaded into the ODS completely and accurately
Creates error handling and logging processes in the data environments
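The reconciliation and error-handling duties above could be sketched, purely as an illustration (the function name, logger name, and table names are hypothetical, not part of any actual system), along these lines:

```python
import logging

# Structured logging for the load process, as a stand-in for a real
# error-handling and logging framework in the data environment.
logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("ods_load")

def reconcile(source_count: int, target_count: int, table: str) -> bool:
    """Compare row counts between a source extract and the ODS target table.

    A real reconciliation process would typically also compare checksums
    or column-level aggregates, not just row counts.
    """
    if source_count != target_count:
        log.error("Reconciliation failed for %s: source=%d target=%d",
                  table, source_count, target_count)
        return False
    log.info("Reconciliation passed for %s: %d rows", table, source_count)
    return True
```

A failed check would feed the error-handling path (alerting, quarantining the load) rather than silently continuing.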
Collaborates with Analytics Engineers and Data Owners to understand their data requirements and to identify and prioritize opportunities to improve efficiency and processes through integration
Designs and implements integration flows and enhancements, including APIs and/or file-based integrations
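A file-based integration flow of the kind described might, as a minimal sketch (the staging table, file layout, and use of SQLite here are illustrative assumptions only), look like:

```python
import csv
import sqlite3

def load_csv_to_staging(csv_path: str, db_path: str, table: str) -> int:
    """Load a CSV extract into a staging table; return the row count loaded.

    Illustrative only: a production flow would add validation, typing,
    and error handling around each step.
    """
    with open(csv_path, newline="") as f:
        rows = list(csv.reader(f))
    header, data = rows[0], rows[1:]

    con = sqlite3.connect(db_path)
    cols = ", ".join(f'"{c}"' for c in header)
    placeholders = ", ".join("?" for _ in header)
    con.execute(f'CREATE TABLE IF NOT EXISTS "{table}" ({cols})')
    con.executemany(f'INSERT INTO "{table}" ({cols}) VALUES ({placeholders})',
                    data)
    con.commit()
    con.close()
    return len(data)
```

An API-based flow would follow the same shape, with an HTTP fetch replacing the file read.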
Monitors performance activities and troubleshoots, resolves, and reports integration issues to impacted teams and stakeholders
Partners with the Analytics Engineers to ensure that the data is properly loaded into the data environments
Acts as a mentor to less experienced Data Engineers
Makes necessary optimizations to ensure reliability and efficiency
Identifies and implements data quality and data governance processes to ensure data integrity and compliance with regulatory requirements
Maintains a customer-first mentality in collaboration with stakeholders, leaders, and fellow engineers
Requirements
Minimum 7 years of related experience in engineering operational data stores
Minimum 7 years of experience creating and modifying SQL Server Integration Services (SSIS) packages or Azure Data Pipelines
Prior experience with database structures, data normalization, de-normalization, entity relationships, ODS concepts, data loading issues, and best practices
Prior experience working in Microsoft Fabric
Familiarity with data warehousing concepts, dimensional modeling, and columnar databases
Programming proficiency in languages such as Python, Java, or Scala, and experience with SQL
Prior experience with cloud-based data platforms, such as AWS, Azure, or Google Cloud
Understanding of data modeling, database design principles, and ETL/ELT processes
Strong analytical and critical-thinking skills applied to manipulating and analyzing data
Detail-oriented and focused
Analytical end-to-end thinking and problem-solving skills
Ability to work independently on concurrent tasks and adhere to deadlines
Experience working in an Agile environment and the ability to adapt easily to changing priorities
Strong interpersonal, oral, and written communication skills
Highly proficient with Microsoft Word, Excel, PowerPoint and Outlook
Intermediate skills in MS Excel, including the ability to use functions such as VLOOKUP, HLOOKUP, CONCATENATE, and PivotTables