PrizePicks is the fastest-growing sports company in North America, recognized for its industry-leading Daily Fantasy Sports platform. The Data Engineering Manager will lead a team responsible for building and operating scalable data engineering pipelines, ensuring data is trusted and accessible to support analytics and business decision-making.
Responsibilities:
- Lead, mentor, and develop a team of Data Engineers through regular 1:1s, coaching, and performance feedback
- Own hiring, onboarding, career development, and performance management for the team
- Foster a culture of accountability, ownership, collaboration, and continuous improvement
- Balance delivery expectations with sustainable team practices
- Own the design, development, and operation of data ingestion, transformation, and storage pipelines (batch and streaming)
- Ensure reliability, scalability, performance, and cost efficiency of data systems
- Drive best practices in data modeling, data quality, observability, and operational excellence
- Review and guide technical designs, architecture decisions, and code quality
- Partner with platform and infrastructure teams on tooling, security, and scalability
- Translate business and analytics requirements into scalable technical solutions
- Partner with Product, Analytics Engineering, Data Science, and Business teams to deliver trusted datasets and metrics
- Own sprint planning, prioritization, and delivery commitments
- Communicate progress, risks, and tradeoffs clearly to stakeholders and leadership
- Establish and enforce data engineering standards, patterns, and documentation
- Support data governance, data contracts, and schema management
- Ensure compliance with security, privacy, and data retention requirements
Requirements:
- A graduate degree in Computer Science, Mathematics, or a related quantitative field
- 8+ years of experience in Data Engineering or related roles
- 3+ years leading engineers (hiring, coaching, delivery accountability)
- Strong distributed systems fundamentals (scaling, reliability, data correctness)
- Proficiency with SQL and hands-on ability in at least one programming language (Python, Java, or Scala) — able to dive into code and unblock delivery
- Experience with orchestration tools (Airflow, Argo, etc.)
- Solid understanding of data modeling, ETL/ELT patterns, and data reliability
- Experience with streaming technologies (Kafka, Flink, Spark Streaming)
- Experience with modern data warehouses / lakehouses (BigQuery, Snowflake, Redshift, Iceberg)
- Familiarity with analytics engineering tools (dbt, semantic layers)
- Exposure to data governance, observability, and data quality tooling
- Self-starter with a strong sense of ownership: drives ambiguous problems to durable solutions
- Builds and grows teams: mentors, raises the bar, and develops talent
- Experience with cloud data ecosystems (GCP preferred; AWS or Azure)