WeightWatchers is a global digital health company redefining the category of weight health care. They are seeking a seasoned Staff Data Engineer to drive scale, performance, and actionability across their data ecosystem, with a focus on building core capabilities for dynamic analytics and self-service data workflows.
Responsibilities:
- Support the analytics layer by developing core Looker views and building out our agentic AI infrastructure to deliver automated self-service capabilities and reduce reporting bottlenecks
- Design, build, and scale ELT pipelines that are resilient, efficient, and modular
- Partner with Finance, Product, and Analytics to ensure our data models solve the right problems
- Build and maintain analytics schemas (including star schemas) that abstract complex logic into user-friendly datasets
- Lead projects from inception to production, taking accountability for data integrity and the trustworthiness of the platform
- Monitor production health with observability tools (e.g., Datadog, Monte Carlo), ensuring our data ecosystem remains robust and reliable
- Act as a technical lead, conduct code reviews, define engineering culture, and champion best practices
Requirements:
- 5+ years in data engineering, including at least two years focused on distributed, large-scale cloud data warehouses
- Proven experience with Snowflake performance optimization and cost governance
- Deep proficiency in Python, with a focus on writing modular, reusable, and testable code (unit/integration tests) for complex data processing
- Expert-level SQL and a sophisticated understanding of data warehousing methodologies to build performant, scalable analytics layers
- Practical experience with modern CI/CD frameworks (e.g., GitHub Actions, Argo CD) to drive engineering velocity and platform stability
- Expertise in architecting and scaling orchestration-as-code workflows (e.g., Prefect or Airflow) to manage complex dependencies and ensure pipeline resilience
- Deep proficiency in deploying monitoring and alerting frameworks (e.g., Datadog) to maximize system uptime while mitigating alert fatigue
- Experience managing business-critical production pipelines with a focus on uptime, data quality, and defining SLAs
- BS/MS in Computer Science, Information Systems, or a related technical field
- Familiarity with Snowflake Cortex/MCP is a plus