Skills: Airflow, Distributed Systems, DynamoDB, Java, Kafka, Microservices, Python, SQL, Go, AI, Machine Learning, ML, Large Language Models, LangGraph, Snowflake, Kinesis, BigTable, Memcached, Leadership
Role Overview
- Design and build distributed systems that process, enrich, and respond to billions of behavioral events per day in real time
- Develop high-performance APIs and services that support advertising, identity, and intent features across the Marketing Platform
- Leverage machine learning and large language models (LLMs) to analyze behavioral data, classify content, extract signals, and enable intelligent decision-making
- Build intelligent agents using frameworks like LangGraph or MCP to reason over data and power user-facing insights
- Design and operate data pipelines using tools like Kafka, Kinesis, and ClickHouse to support both streaming and batch workloads
- Drive quality, performance, scalability, and observability across all systems you own
- Collaborate cross-functionally with product managers, data scientists, and engineers to deliver customer-facing features and internal tooling
- Contribute to technical leadership and mentorship of teammates
Requirements
- 8+ years of backend, data, or infrastructure engineering experience, or equivalent impact and leadership
- Strong experience in at least one of the following:
  - Distributed systems engineering (e.g., building low-latency, high-throughput APIs, scalable microservices, and event-processing pipelines)
  - Big data infrastructure (e.g., streaming, warehousing, low-latency storage at scale)
  - Applied AI/ML, including use of LLMs for extraction, classification, or reasoning tasks
- Proficiency in one or more core languages: Java, Go, Python
- Solid grasp of SQL and large-scale data modeling
- Familiarity with databases and tools such as ClickHouse, DynamoDB, Bigtable, Memcached, Kafka, Kinesis, Firehose, Airflow, and Snowflake