Affirm is reinventing credit to make it more honest and friendly, giving consumers the flexibility to buy now and pay later without any hidden fees or compounding interest. The role involves designing and building data infrastructure systems to support Affirm's products and business requirements, optimizing data pipelines, and collaborating with other teams to deliver streaming solutions.
Responsibilities:
- Design and build data infrastructure systems, services, and tools to handle new Affirm products and business requirements, scaling securely to millions of users and their transactions
- Build and optimize high-throughput, low-latency data pipelines for critical business applications
- Build frameworks and services which will be used by other engineering teams at Affirm to manage billions of dollars in loans and power customer experiences
- Help support the Streaming team, which forms the backbone for several online and offline workloads at Affirm
- Improve the reliability, efficiency, and availability of the Data Platform at scale
- Collaborate with other teams at Affirm, including ML and Analytics, to deliver streaming solutions tailored to their use cases and aligned with best practices
- Monitor, troubleshoot, and maintain production streaming systems, ensuring their high availability and reliability
Requirements:
- 8+ years of industry experience building large-scale production systems
- Strong hands-on experience with Apache Kafka or similar streaming solutions for large-scale event streaming and message queueing
- Experience building and operating robust and highly available infrastructure
- Expertise with at least one stream processing framework, such as Spark, Samza, Flink, or Beam
- Hands-on experience with Kafka Connect and Kafka Schema Registry, including an understanding of their underlying functionality and implementation
- Solid programming skills in Python, Java, or Kotlin
- Experience leading technical projects and mentoring junior engineers
- Exceptionally collaborative with a history of delivering complex technical projects and working closely with stakeholders
- This position requires a Bachelor's degree in a related field or equivalent practical experience
- Experience with data platforms such as Snowflake, Glue, or Databricks is a plus
- Experience with Confluent Platform (Schema Registry, Tableflow) is a strong plus
- Experience with the Apache Iceberg table format and CDC (Change Data Capture) tools is a strong plus
- Knowledge of relational and NoSQL databases is a plus