Perplexity is an AI-powered answer engine that is rapidly growing into one of the world’s leading AI platforms. Perplexity is seeking experienced Data Platform Engineers to design, build, and scale the foundational data systems that support product features, AI research, and analytics.
Responsibilities:
- Design and operate large-scale batch and streaming data pipelines supporting product features, AI training/evaluation, analytics, and experimentation
- Build and evolve event-driven and streaming systems (e.g., Kafka/Kinesis/PubSub-style architectures) for real-time ingestion, transformation, and delivery
- Own batch processing frameworks for backfills, aggregations, and offline computation
- Lead the design and operation of data orchestration systems (e.g., Airflow, Dagster, or equivalent), including scheduling, dependency management, retries, SLAs, and observability
- Establish strong guarantees around data correctness, freshness, lineage, and recoverability
- Design systems that handle scale, partial failure, and evolving schemas
- Build self-serve data platforms that empower engineers, data scientists, and analysts to safely create and operate pipelines
- Improve developer experience for data work through better abstractions, tooling, documentation, and paved paths
- Set standards for data modeling, testing, validation, and deployment
- Drive architectural decisions across data infrastructure for storage, compute, orchestration, and APIs
- Partner closely with engineering and data science teams to align data systems with evolving requirements
- Mentor engineers, review designs, and raise the technical bar across the organization
Requirements:
- 5+ years (Senior) or 8+ years (Staff) of software engineering experience
- Strong experience building production data infrastructure systems
- Hands-on experience with batch and/or streaming data processing at scale
- Deep familiarity with data orchestration systems (Airflow, Dagster, or similar)
- Proficiency in Python and at least one additional backend language (Go, TypeScript, etc.)
- Strong systems thinking: you understand tradeoffs across reliability, latency, cost, and complexity
- Experience supporting ML/AI workflows, training pipelines, or evaluation systems
- Familiarity with data quality, lineage, observability, and governance tooling
- Prior ownership of internal platforms used by many teams