UKG builds a Workforce Operating Platform focused on helping organizations better understand their workforce. The company is seeking a Lead Data Engineer to design, build, and lead the delivery of scalable data solutions on its Cloud Data Platform, while mentoring a team and ensuring that delivered solutions are high quality and aligned with business needs.
Responsibilities:
- Design, develop, and maintain end-to-end data pipelines and transformations using Azure (ADF, Databricks, ADLS Gen2, Synapse) and GCP (Composer, Data Fusion, Dataform, Dataproc, BigQuery, GCS)
- Implement fact and dimension models, change data capture (CDC) patterns, and cloud-native ELT pipelines following established architectural standards
- Optimize data pipelines for performance, scalability, and cost efficiency
- Serve as the technical lead for one or more data domains or initiatives
- Lead design discussions, perform code reviews, and ensure adherence to engineering standards and best practices
- Mentor and guide Senior and Mid-level Data Engineers through hands-on coaching and technical feedback
- Act as the first point of escalation for complex technical issues within owned initiatives
- Partner with Data Architects, Product Owners, Analysts, and Platform teams to translate business requirements into effective data solutions
- Contribute to data modeling decisions to ensure alignment with analytics and reporting use cases
- Support cross-team initiatives by implementing shared frameworks and reusable components defined at the platform level
- Ensure reliability and maintainability of data pipelines through monitoring, alerting, and automated testing
- Implement and maintain CI/CD pipelines using Azure DevOps and/or GitHub Actions
- Create and maintain technical documentation for pipelines, models, and processes
- Support data quality, governance, and metadata standards within owned solutions
Requirements:
- 7–9+ years of experience in data engineering and data warehousing, including cloud-based solutions
- Proven experience delivering complex data pipelines and warehouse solutions in Azure and/or GCP
- Strong hands-on experience with Python, SQL, Spark, and distributed data processing
- Hands-on experience with the relevant cloud service stacks:
  - Azure: Data Factory, Databricks, Synapse, ADLS Gen2, Azure DevOps
  - GCP: Composer, Data Fusion, Dataform, Dataproc, BigQuery, GCS, GitHub
- Solid understanding of dimensional modeling, ELT, CDC, and modern data lakehouse concepts
- Demonstrated ability to lead technical delivery and influence design decisions within a team
- Experience mentoring engineers through code reviews, pairing, and design guidance
- Strong communication skills with the ability to explain technical concepts to non-technical stakeholders
- Experience integrating data from SaaS platforms (Salesforce, D365, Qualtrics, Pendo, etc.)
- Familiarity with DataOps practices, orchestration, and monitoring tools
- Exposure to data quality and governance concepts
- Cloud certification (Azure or GCP) is a plus