WEX is seeking a Sr. Staff Data Platform Engineer to serve as the primary architect and visionary for its core data infrastructure. This role involves building high-performance engines and frameworks that enable other engineers to work more efficiently, while keeping the data platform a competitive advantage.
Responsibilities:
- Architectural Sovereignty: Define the 3-5 year technical roadmap for the Data Lakehouse. You aren't just using tools; you are deciding how storage, compute, and metadata layers (e.g., Apache Polaris, Unity Catalog, or DataHub) interact at a fundamental level
- Platform-as-a-Product: Build internal SDKs, CLI tools, and automated orchestration frameworks. Your goal is to abstract away cloud complexity via Control Planes and Custom Operators, allowing Data Engineers to focus on business logic rather than infrastructure boilerplate
- Internal R&D: Prototype and benchmark emerging technologies (e.g., specialized Spark extensions or Rust-based data accelerators) to keep the platform at the bleeding edge of performance and cost-efficiency
- Global Governance & Security: Architect "compliance-by-design" systems. Automate data lineage, PII masking, and fine-grained access control across petabyte-scale environments using Open Policy Agent (OPA) or similar frameworks without sacrificing developer velocity
- Engineering Excellence & Influence: Set the gold standard for code quality and system design across the company. You will lead Cross-Functional Architecture Reviews and serve as the final escalation point for the most complex system outages or performance bottlenecks
- Organizational Mentorship: Beyond individual mentoring, you will foster an "Engineering Community," influencing the hiring bar and professional development paths for the entire data engineering organization
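The "compliance-by-design" responsibility above (automated PII masking with fine-grained access control) can be illustrated with a toy sketch. The column names and masking rules below are hypothetical; a production system would express such rules in OPA/Rego and enforce them in the query or serving layer rather than in application code.

```python
# Hypothetical column-level masking policy. In production these rules
# would live in a policy engine such as OPA, not in Python constants.
PII_COLUMNS = {"email", "ssn", "phone"}

def mask_value(column: str, value: str) -> str:
    """Mask values in PII columns; pass everything else through unchanged."""
    if column not in PII_COLUMNS:
        return value
    if column == "email":
        # Keep the domain so aggregate analytics on email providers still work.
        local, _, domain = value.partition("@")
        return f"{'*' * len(local)}@{domain}"
    return "*" * len(value)

def mask_row(row: dict) -> dict:
    """Apply the masking policy to every column in a row."""
    return {col: mask_value(col, val) for col, val in row.items()}

row = {"name": "Ada", "email": "ada@example.com", "ssn": "123-45-6789"}
masked = mask_row(row)
# masked["email"] == "***@example.com"; masked["name"] is untouched
```

The design point is that masking happens uniformly at a platform choke point, so individual data engineers never reimplement (or forget) the policy.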
Requirements:
- 15+ years in software engineering and distributed systems, with at least 4 years in a principal or staff-level capacity leading platform-scale initiatives
- Expert-level proficiency in Java or Scala (JVM tuning is a must) and Python
- Deep experience building extensible frameworks: you have built libraries, APIs, or platforms used by other developers, not just end-user applications
- Advanced Docker mastery: deep understanding of container internals, multi-stage builds for polyglot environments, and security hardening (e.g., distroless images, rootless containers)
- Advanced mastery of Kubernetes (EKS) for stateful workloads
- Experience building custom CRDs (Custom Resource Definitions) and Operators to automate the lifecycle of data engines
- Proven ability to architect across AWS and Azure, building cloud-agnostic abstractions that allow workloads to migrate or burst across providers (e.g., leveraging S3/ADLS Gen2 and EMR/HDInsight frameworks)
- Deep knowledge of Apache Iceberg, Hudi, or Delta Lake internals (metadata management, manifest files, and compaction strategies)
- Experience contributing to or deeply customizing open-source data projects (e.g., Spark, Trino, Flink)
- Expertise in Terraform or Pulumi at an enterprise scale
- Experience integrating LLMs and AI agents into the software development lifecycle (SDLC) to automate code reviews, generate unit tests, and perform automated root-cause analysis of system failures
- Ability to architect the infrastructure needed to support AI agent development, including vector database integration
- Proven track record of leading by influence: driving adoption of new technologies across multiple autonomous teams
- Ability to communicate complex architectural trade-offs (e.g., 'Latency vs. Consistency' or 'Build vs. Buy') to C-suite executives and junior engineers alike
- Experience with Rust or Go for systems-level tooling or sidecars is a major advantage
- Bachelor's or Master's degree in Computer Science (Distributed Systems focus) preferred, or equivalent deep industry experience
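The cloud-agnostic abstraction requirement above can be sketched as a minimal, provider-neutral object-store interface. All names here are hypothetical; real implementations would wrap boto3 (S3) and azure-storage-blob (ADLS Gen2) behind the same interface, which is what lets workloads migrate or burst across providers.

```python
from typing import Protocol

class ObjectStore(Protocol):
    """Minimal provider-neutral object-store interface (hypothetical)."""
    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...

class InMemoryStore:
    """Test double; production variants would wrap an S3 or ADLS client
    while satisfying the same ObjectStore protocol."""
    def __init__(self) -> None:
        self._objects: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]

def archive(store: ObjectStore, key: str, payload: bytes) -> bytes:
    # Pipeline code depends only on the interface, never on a concrete
    # cloud SDK, so swapping providers is a dependency-injection change.
    store.put(key, payload)
    return store.get(key)

store = InMemoryStore()
round_tripped = archive(store, "lake/raw/events.json", b'{"ok": true}')
# round_tripped == b'{"ok": true}'
```

Structural typing via `typing.Protocol` keeps the concrete stores free of any inheritance coupling to the platform library.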