Build and maintain scalable, high-performance data infrastructure supporting both batch and real-time use cases for a wide variety of global stakeholders.
Design and implement architectural solutions that drastically reduce friction in data access across the organization.
Collaborate cross-functionally with Engineering, Data, and Product teams to align the platform with diverse technical requirements.
Integrate security best practices to ensure access compliance across all data assets.
Troubleshoot and resolve complex issues in distributed data systems to ensure data integrity.
Architect and implement AI-driven diagnostic systems to autonomously identify, analyze, and resolve complex issues impacting data availability and platform health.
Requirements
8+ years of experience in data infrastructure or backend software engineering with a "Data Fanatic" mindset.
Cloud Mastery: Strong expertise in cloud environments, container orchestration, and event streaming technologies for managing large-scale infrastructure.
Data Processing: Strong experience with distributed data processing and OLAP technologies such as Spark, Trino, and ClickHouse; Snowflake or Databricks experience is a plus.
Technical Excellence: Ability to write clean, maintainable code in Python, Go, Java, Ruby, or C++.
Proactive Leadership: A "can-do" attitude with a proven track record of solving complex problems and taking initiative.
Collaborative Spirit: Strong ability to work across teams to deliver high-quality, high-impact results.
System Observability: Expertise in building highly observable platforms using tools like Prometheus, Grafana, or Datadog to ensure deep visibility and proactive reliability of global systems.