Sezzle is a fintech company focused on revolutionizing the shopping experience through innovative payment solutions. They are seeking a Principal Engineer, Data Infrastructure, to take ownership of their database and data warehousing systems, ensuring performance and scalability while evaluating new technologies.
Responsibilities:
- End-to-end ownership of our database and data warehousing infrastructure, including its KPIs and SLAs, spanning MySQL, Postgres, change data capture (CDC), and Redshift
- Lead optimization of database and query performance
- Build and maintain tooling to manage and upgrade our data infrastructure, continually improving it through migrations and optimizations
- Partner with the Development team to ensure we ship solid queries against a battle-tested schema that meets the needs of our products and business
- Help Sezzle evolve our data systems beyond the current scale and stack, adopting whatever tooling is necessary to process high volumes of writes and events while performing complex ETL
- Evaluate and integrate new technologies, guiding the evolution of Sezzle’s data infrastructure
- Help data engineers and analysts optimize Redshift and warehouse performance, including query tuning, modeling improvements, and cost management
Requirements:
- 12+ years of experience in DBA, SRE, or Data Engineering roles, with a strong track record of scaling production-grade systems
- Deep expertise with MySQL, Postgres, and Amazon Redshift (or similar products), including performance tuning, table design, and workload management
- Advanced proficiency in SQL
- Strong hands-on experience with data replication and ETL/ELT frameworks, especially dbt, AWS DMS, or similar tools
- Strong understanding of data modeling, distributed systems, and warehouse/lake design patterns
- Ability to work in a fast-paced, collaborative environment with excellent communication and documentation skills
- Prior experience in high-growth, data-intensive fintech or similar regulated environments
- Knowledge of lakehouse architectures and modern stacks such as Snowflake, Databricks, Iceberg, or Delta Lake
- Experience designing scalable, fault-tolerant data pipelines using modern orchestration tools (Airflow, Dagster, Prefect, etc.) that process anywhere from 100 GB to 1 TB of new data per day
- Familiarity with streaming technologies (Kafka, Kinesis, Flink, Spark Streaming)
- High enthusiasm for using AI tools to be as productive as possible