TRM Labs provides blockchain analytics and AI solutions to help government agencies and businesses detect and disrupt financial crime. As a Senior Data Engineer on the Data Product team, you will design and build critical data services that analyze blockchain transaction activity at scale, contributing to a safer financial system.
Responsibilities:
- Build highly scalable features that integrate with dozens of blockchains
- Design data models optimized for storage and retrieval, supporting sub-second query latency over blockchain data
- Partner with data scientists, backend engineers, and product managers across departments to design and implement novel data models that enhance TRM’s products
- Work closely with customer-facing teams to deeply understand user needs and translate them into scalable data solutions
- Communicate exceptionally: write clear design docs, proactively share tradeoffs, and build alignment across disciplines
Requirements:
- Bachelor's degree (or equivalent) in Computer Science or a related field
- 5+ years of hands-on experience architecting scalable APIs and distributed systems, guiding projects from initial ideation through production deployment
- Exceptional programming skills in Python and proficiency in SQL or SparkSQL
- Versatility across the data engineering stack, with depth in one or more of the following areas:
  - In-depth experience with data stores such as BigQuery and Postgres
  - Proficiency in data pipeline and workflow orchestration tools such as Airflow and dbt
  - Expertise in data processing and streaming technologies, including Dataflow, Spark, Kafka, and Flink
  - Competence in deploying and monitoring infrastructure on public cloud platforms, using tools such as Docker, Terraform, Kubernetes, and Datadog
  - Proven ability to load, query, and transform large datasets
- You've owned 0–1 systems: building pipelines, data platforms, or ML/BI workflows from scratch, not just maintaining legacy systems
- You simplify complexity, writing and communicating technical decisions clearly to both technical and non-technical stakeholders
- You're cost-conscious: you design for performance, scale, and efficiency
- You've mentored engineers or analysts and enjoy leveling up those around you
- Bonus: Experience with LLMs or AI-powered workflows (e.g., prompt engineering, internal tooling, semantic search)