Empower is a company focused on transforming financial lives and fostering a flexible work environment. The Director of Data Engineering will lead a team to transform data into insights for business stakeholders, shaping data strategy and overseeing the development of data engineering solutions.
Responsibilities:
- Lead a team of data engineers transforming data from disparate systems to enable insights and analytics for business stakeholders
- Create technical roadmaps and recommend strategies for data pipelines and integration
- Leverage cloud-based infrastructure to implement scalable, resilient, and efficient data engineering solutions
- Collaborate with data analysts, data scientists, database administrators, cross-functional teams, and business stakeholders to solve problems
- Influence architectural decisions and design patterns across the data platform
- Provide technical leadership across the software development lifecycle, from design to deployment, including hands-on contribution
- Develop project plans, facilitate prioritization and timelines, allocate resources, and own assigned technical projects in a fast-paced environment
- Perform code reviews and ensure data engineers follow coding best practices
- Define and validate test cases to ensure data quality, reliability, and confidence in delivered data
- Continuously improve quality, efficiency, and scalability of data pipelines, reducing gaps and inconsistencies
Requirements:
- Bachelor of Science in Computer Science or equivalent
- 7+ years of post-degree professional experience
- 4+ years building and maintaining ETL pipelines in a data warehouse environment
- 5+ years of Python development experience
- Experience hiring and leading a team of 3+ data engineers, including supervision, goal-setting, and supporting professional growth
- Strong communication and interpersonal skills to initiate and drive projects
- Experience with AWS services such as Kinesis, Kinesis Data Firehose, Aurora, Redshift (including UNLOAD and Spectrum), Elastic MapReduce (EMR), SageMaker, and Lambda
- Experience provisioning data sets for BI tools such as Tableau, Amazon QuickSight, or similar, and knowledge of analytics tools such as R, Plotly, and Python pandas
- Expert SQL skills (including performance tuning, indexes, and materialized views) and proficiency designing and querying NoSQL databases to optimize big data storage and retrieval
- Experience with API integrations with external vendors to push/pull data between organizations, and familiarity with data orchestration pipelines using Argo or Airflow
- Redshift experience
- Financial services industry experience
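For a sense of the day-to-day work described above, here is a simplified, self-contained sketch of a single ETL step in Python. All table, field, and record names are hypothetical, and a production pipeline would run under an orchestrator such as Airflow and target a warehouse such as Redshift rather than an in-memory database:

```python
import sqlite3

# Hypothetical source records, standing in for rows extracted from an
# upstream system (e.g. an API pull or a Kinesis stream).
RAW_RECORDS = [
    {"account_id": "A-1", "balance": "1250.50", "currency": "usd"},
    {"account_id": "A-2", "balance": "980.00", "currency": "USD"},
    {"account_id": "A-3", "balance": None, "currency": "USD"},  # bad row
]

def transform(records):
    """Normalize currency codes and drop rows with missing balances."""
    clean = []
    for rec in records:
        if rec["balance"] is None:
            # A real pipeline would route this to a dead-letter table
            # rather than silently dropping it.
            continue
        clean.append(
            (rec["account_id"], float(rec["balance"]), rec["currency"].upper())
        )
    return clean

def load(rows, conn):
    """Load transformed rows into a warehouse-style target table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS account_balances "
        "(account_id TEXT PRIMARY KEY, balance REAL, currency TEXT)"
    )
    conn.executemany(
        "INSERT OR REPLACE INTO account_balances VALUES (?, ?, ?)", rows
    )
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    load(transform(RAW_RECORDS), conn)
    count = conn.execute("SELECT COUNT(*) FROM account_balances").fetchone()[0]
    print(count)  # 2 rows survive the quality check
```

The quality gate in `transform` mirrors the responsibility above of validating test cases to ensure data quality before data reaches business stakeholders.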