Defining and evolving data contract standards across the company, including schema enforcement, versioning, and validation patterns.
Designing interoperable ingestion and publishing frameworks that enable upstream producers (e.g., product engineering teams) to integrate seamlessly with the data platform.
Building and standardizing APIs, libraries, or SDKs that simplify event logging, schema validation, and contract compliance.
Establishing best practices for schema registry usage and distributed schema validation across streaming (e.g., Kafka-based) and batch systems.
Designing clear patterns for when to use the data lake vs. the warehouse, how curated layers are exposed, and how downstream consumers access data safely.
Leading reverse ETL and activation architecture to support operational use cases.
Defining and enforcing access control, governance, and compliance standards (e.g., PHI/PII handling, de-identification (DEID) boundaries, RBAC).
Partnering with Product Engineering, Security, Compliance, Analytics Engineering, and Infrastructure to align on standards and long-term direction.
Mentoring engineers and influencing engineering culture around data quality, ownership, and contracts.
Driving adoption of AI-assisted development practices (e.g., cloud-based coding environments, internal AI tooling, or agentic workflows) to accelerate platform delivery.
Designing guardrails for AI access to data systems, including scoped permissions, auditing, and compliance-aware controls.
Partnering with product and AI teams to ensure our data contracts, schemas, and curated layers are AI-consumable and safe by default.
Evaluating how internal data platform assets can power AI use cases and intelligent automation across the company.
Requirements
8+ years of experience in data engineering, platform engineering, or backend platform development.
Demonstrated experience designing data contracts, schema governance, or producer/consumer standards at scale.
Strong expertise in Python and SQL, with hands-on experience building scalable data frameworks.
Experience with distributed data systems such as Spark (Databricks or EMR) and modern lakehouse architectures (Delta Lake / Iceberg).
Experience with data warehouses such as Snowflake and a strong understanding of performance and access patterns.
Familiarity with schema registry systems and schema evolution in streaming systems (e.g., Kafka).
Experience building APIs, shared libraries, or platform services adopted by multiple teams.
Strong understanding of access control, RBAC, and compliance constraints in regulated environments.
Proven ability to lead cross-functional architectural initiatives across product, analytics, and infrastructure teams.
Clear communication skills and a track record of influencing standards across an organization.
Experience working with AI-assisted development tools or cloud-based coding environments (e.g., Claude Code, Codex, Cursor, internal code generation frameworks, or similar systems).
Strong understanding of governance considerations for GenAI systems, including access control, prompt safety, sensitive data handling, and auditability.
Perspective on how structured data models and contracts improve AI reliability and downstream automation.
Tech Stack
Cloud
ETL
Kafka
Python
Spark
SQL
Benefits
Flexible, hybrid work environment at our convenient Soho location
Unlimited vacation
100% paid employee health benefit options (including medical, dental, and vision)
Commuter benefits
401(k) with employer-funded match
Corporate wellness program with Wellhub
Sabbatical leave (for employees with 5+ years of service)
Competitive paid parental leave and fertility/family planning reimbursement
Cell phone reimbursement
Catered lunch every day, along with beverages and snacks
Employee Resource Groups and ZocClubs to promote shared community and belonging