Consumer Edge is a company focused on redefining consumer data analytics, providing insights into consumer spending for investment firms and brands. The company is seeking a seasoned VP of Data Engineering to lead its data engineering team, ensuring the reliability and scalability of its data infrastructure while managing multiple teams and collaborating across departments.
Responsibilities:
- Lead, mentor, and grow a team of data engineers, building a culture of ownership, craft, and continuous improvement
- Own hiring, onboarding, and performance management for the data engineering function
- Act as a technical role model — setting high standards while remaining approachable and supportive
- Own the architecture, reliability, and evolution of our GCP data platform — including BigQuery, Cloud Composer/Airflow, Dataflow, Pub/Sub, and GCS
- Design and maintain robust, scalable pipelines for ingesting, transforming, and serving diverse alternative datasets (web, CPG, transaction data, etc.)
- Drive infrastructure best practices: cost optimisation, observability, incident response, and disaster recovery
- Ensure data security, access controls, and compliance standards appropriate for regulated financial services clients
- Translate business priorities and client requirements into a clear, deliverable technical roadmap
- Partner with data operations, data science, and product teams to accelerate dataset onboarding and expand platform capabilities
- Represent data engineering at the leadership level — contributing to company strategy and advocating for data quality as a core business value
Requirements:
- 10+ years in data engineering, with at least 3 years in a leadership or management role (managing managers and teams)
- Proven experience building and operating large-scale data pipelines on Google Cloud Platform
- Track record of delivering high-quality data infrastructure in a fast-moving, commercially sensitive context
- Deep expertise across the GCP data stack: BigQuery, Dataflow, Pub/Sub, GCS, Cloud Composer, and related services
- Hands-on experience with dbt (Core or Cloud) for scalable transformation layer design, including modelling patterns, testing frameworks, and documentation standards
- Strong understanding of data pipeline design, ELT/ETL patterns, data modelling, and workflow orchestration
- Solid grasp of data governance, quality frameworks, and security best practices
- Familiarity with the unique challenges of alternative data — diverse formats, inconsistent schemas, high ingestion volumes, and strict data provenance requirements
- Practical experience with infrastructure-as-code tooling (Terraform and/or Pulumi) for provisioning and managing cloud resources; able to set IaC standards and review infrastructure changes with the same rigour applied to application code
- Strong proficiency in Python and SQL; comfortable reviewing code and setting engineering standards across the team
- Excellent communicator with the ability to engage engineers, data scientists, and commercial stakeholders alike
- Strong hiring instincts and a genuine passion for developing people
- Pragmatic and decisive — able to balance technical rigour with commercial urgency
- High ownership mindset; comfortable operating with autonomy in a high-stakes environment
Preferred Qualifications:
- Experience in fintech, alternative data, financial data, or another data-as-a-product environment strongly preferred
- Experience supporting data science or ML teams with feature engineering infrastructure
- Familiarity with data licensing, provenance tracking, or data vendor management
- Experience with data mesh or data-as-a-product organisational models
- Open source contributions or published technical work