WTW is a company that helps clients control health and welfare plan costs and improve health outcomes. It is seeking a Principal Data Engineer to design and build scalable data platforms that support analytics and advanced AI use cases, while also mentoring engineering teams and driving platform modernization.
Responsibilities:
- Lead the design and evolution of enterprise‑scale data platforms, including ingestion, transformation, storage, and consumption layers
- Define and enforce data engineering standards, patterns, and best practices (e.g., pipeline design, testing, CI/CD, observability)
- Serve as a technical authority on data architecture decisions, trade‑offs, and platform strategy
- Design, build, and optimize high‑performance batch and streaming data pipelines
- Improve reliability, performance, scalability, and cost efficiency of existing data assets
- Implement reusable frameworks and reference architectures to accelerate delivery across teams
- Design and operate Databricks‑based lakehouse solutions using Delta Lake, Spark, and notebooks/jobs
- Establish best practices for Databricks workspace architecture, security, performance tuning, and cost management
- Lead or support migrations to Databricks and modernization of legacy data pipelines
- Partner with data governance and security teams to embed data quality checks, metadata, lineage, and access controls into pipelines
- Ensure platforms support regulatory, compliance, and enterprise risk requirements
- Work closely with data architects, analytic engineers, data scientists, product owners, and business stakeholders
- Translate business and analytic requirements into scalable technical solutions
- Influence roadmaps by tying platform investments to measurable business outcomes
- Mentor senior and mid‑level engineers; raise the overall technical bar of the team
- Lead by example through hands‑on contribution to critical pipelines and platform components
Requirements:
- University or college degree in Computer Science, Information Systems, Mathematics, Engineering, Statistics, or related field of study
- 8+ years of experience in data engineering or platform engineering roles
- Deep expertise in building large-scale data pipelines and platforms in cloud environments (Azure, AWS, or GCP)
- Strong proficiency with distributed data processing (e.g., Spark)
- Advanced SQL and strong Python programming skills
- Experience designing data models and architectures for analytics and downstream consumption
- Strong analytical and problem-solving skills
- Strong project management and organizational skills
- Proven ability to work independently and as part of a team
- Prior experience setting engineering standards or leading platform modernization initiatives
- Experience with data ingestion tools such as Matillion, Fivetran, or Airbyte
- Extensive hands-on experience with Databricks and Lakehouse architecture strongly preferred
- Experience with Delta Lake, Spark optimization, and cost/performance tuning strongly preferred
- Experience supporting advanced analytics, machine learning, or AI-driven use cases
- Familiarity with modern data tooling (e.g., orchestration, data quality frameworks, metadata/catalog tools)