Chainalysis is a blockchain intelligence company. It is seeking a Senior Backend Engineer to design and deliver new platform capabilities, improve data processing services, and lead engineering initiatives that support mission-critical investigations and monitoring workflows.
Responsibilities:
- Design and lead delivery of new platform capabilities that serve mission‑critical investigations and monitoring workflows
- Operate services that ingest, transform, and serve hundreds of terabytes of data with clear SLOs for latency, freshness, and availability
- Improve the scalability, performance, and cost efficiency of our data plane and APIs
- Raise the quality bar across reliability, security, and compliance for both cloud and on‑premises deployments
- Mentor engineers across teams and influence technical strategy beyond your immediate group
- Own and evolve backend services powering customer‑facing APIs, usage/billing, alerting, and data observability
- Lead team and cross-team initiatives end‑to‑end: discovery, architecture, implementation, rollout, and post‑launch learning
- Architect event‑driven and streaming workflows (e.g., Kafka) with strong data contracts and schema evolution
- Drive operational excellence: SLOs, runbooks, on‑call, incident reviews, and capacity plans for high‑QPS systems
- Partner with product, data engineering/science, and security to translate customer requirements into durable systems
Requirements:
- Expert backend engineering experience building cloud‑hosted services and data pipelines on AWS or GCP (bonus: both)
- Deep proficiency with APIs, streaming systems, and distributed systems (e.g., microservices on Kubernetes)
- Demonstrated ownership of systems operating at scale (hundreds to thousands of RPS; TB–PB data volumes)
- High judgment on reliability, security, and cost, with a track record of measurable improvements
- Ability to lead without authority through mentoring, design reviews, and cross‑org influence
- Blockchain domain knowledge (protocol fundamentals; smart contracts/Solidity)
- Databricks experience (Spark, Delta Lake, Delta Live Tables) or PySpark at scale
- Experience with multi‑tenant, usage‑tracking, and billing systems
- Experience with on‑premises or regulated/air‑gapped deployments