Red Ventures is a global portfolio of high-growth companies seeking a talented Data Engineering Manager to join its Home and Utility group. In this role, you will lead a team of Data Engineers who design and build data pipelines, ensure data quality, and support business analytics and machine learning efforts.
Responsibilities:
- Lead a team of Data Engineers with varied experience levels in designing and building data pipelines from diverse data sources into a target data warehouse, using both real-time and batch load strategies built on cutting-edge cloud technologies
- Serve as a people leader by directly mentoring and coaching engineers, supporting formal performance review cycles, and providing regular, actionable feedback to drive individual growth and team success
- Work with a cross-functional team of business stakeholders, engineering leaders, data analysts, and data scientists to formulate both business and technical requirements
- Partner with business and data leaders to understand broader business strategies and define a roadmap outlining how the team will support and enable key initiatives
- Establish and enforce standards for documenting data platform designs, including logical and physical data models, metadata, ETL specifications, and end-to-end integration workflows
- Design processes for ensuring data quality and integrity while providing timely responses to ad hoc requests
- Lead and execute proof-of-concepts where appropriate to assess, validate, and improve technical processes and approaches
Requirements:
- Excellent communication skills, with the ability to clearly articulate complex technical concepts to both technical and non-technical stakeholders
- 3+ years of experience managing a mid-sized team
- A technical leader who can guide and coach a team of data engineers
- 5+ years of experience in the big data space
- Experience in translating business requirements into technical data solutions on a large scale
- Experience building and maintaining complex ETL pipelines with Spark (Spark SQL / DataFrame / Dataset APIs) in Scala or Python
- Experience with one of the major data warehousing solutions (we use Databricks but welcome candidates with experience in Snowflake, Oracle, Teradata, or Redshift)
- 4+ years of experience working with cloud technologies (we use AWS but welcome candidates with experience in Azure or GCP)
- Experience with GitHub and CI/CD processes
- Experience driving operational excellence for large-scale data platforms, including reliability, scalability, performance, and cost optimization
- Experience with compute technologies such as Amazon EMR and Databricks
- Strong analytical and interpersonal skills
- Enthusiastic, highly motivated, and able to learn quickly
- Able to work through ambiguity in a fast-paced, dynamically changing business environment