Gallagher is a global community committed to empowering businesses and individuals to thrive. They are seeking an Associate Data Engineer to develop and maintain enterprise data integrations and pipelines, collaborate with various teams, and enhance data solutions for analytics and product initiatives.
Responsibilities:
- Develop integration workflows, ensuring solutions are built accurately and to specification
- Develop and maintain requirements, design documentation, and test plans
- Seek out, design, and implement internal process improvements, such as automating manual processes and optimizing data delivery
- Coordinate with BI Engineers, Financial Applications, and Oracle HR teams on data management, reconciliation, test data setup, etc.
- Develop and maintain data pipelines to ingest data from a wide variety of data sources (structured and unstructured) into Snowflake
- Construct and maintain enterprise-level integrations using the Snowflake platform, Azure Synapse, Azure SQL, and SQL Server
- Create data tools for data analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader
- Design analytics tools that utilize the data pipeline to deliver actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
- Assist with building Semantic Views and Agents using Cortex AI Functions in Snowflake to support deep learning and interactive user queries
- Troubleshoot issues, drive root-cause analysis, and work with infrastructure teams to resolve incidents and reach a permanent resolution
- Partner with data and analytics teams to strive for greater functionality in our data systems
- Coordinate development and support with globally located team members
- Understand the layout and working of existing integrations that send and receive data between Oracle, Concur, JDE, Corporate Data Platform, and other systems
Requirements:
- A relevant technical BS degree with 2+ years of experience, or a Master's degree in Information Technology, Data Science, Computer Engineering, or a related field
- 1+ years of experience writing SQL queries against any RDBMS, including query optimization
- 1+ years of experience leveraging technologies such as Snowflake, Azure Data Factory, and SQL Server
- Strong experience with Python, Java, and XML
- Familiarity with structuring a Data Lake for reliability, security, and performance
- Familiarity with Medallion architecture, AI frameworks, AI data readiness, and machine learning algorithms
- Ability to read and write effective, modular, dynamic, parameterized, and robust code, and to follow established coding standards
- Strong analytical, problem-solving, and troubleshooting abilities
- Good understanding of unit testing, software change management, and software release management
- Knowledge of DevOps, MLOps, and AIOps processes
- Experience performing root cause analysis on data and processes to answer specific business questions and identify opportunities for improvement
- Experience working within an agile team is preferred
- Excellent communication skills