Responsibilities
Lead, mentor, and grow a team of data engineers and architects
Define and execute the technical roadmap for production database systems (MySQL, PostgreSQL, DynamoDB, Elasticsearch)
Own the architecture and governance of binlog replication, logical replication, and CDC workflows
Drive strategy and reliability for ELT/ETL pipelines and Kafka-based streaming architectures
Set standards for performance optimization, query tuning, indexing, and database scaling across teams
Oversee backup, failover, disaster recovery — including point-in-time recovery (PITR) — and incident response for all production data systems
Drive cost efficiency, infrastructure optimization, and monitoring across cloud-managed data services (AWS RDS, Aurora, DynamoDB)
Champion data integrity, security, and compliance standards across all data engineering work
Partner cross-functionally with backend, data science, infrastructure, and product teams to align on data platform priorities
Establish engineering guardrails, best practices, and documentation to enable team autonomy and quality at scale
Lead the evaluation and selection of next-generation data warehousing technology (Snowflake, Databricks, AWS Redshift Serverless) — assessing performance, cost, ecosystem fit, and migration complexity to inform a platform decision
Own the design of an upgraded data model for the warehouse in partnership with data engineers and architects, establishing standards for schema design, partitioning, access patterns, and downstream consumption
Oversee the end-to-end migration from the current Redshift warehouse — planning the phased approach, managing cutover risk, and ensuring continuity of downstream reporting and analytics throughout
Requirements
8+ years of data or database engineering experience, with 2+ years in an engineering management role
Deep hands-on experience with MySQL and/or PostgreSQL in high-availability production environments
Proven track record leading and developing teams of engineers in fast-paced, data-intensive environments
Strong expertise in binlog/logical replication and integrating OLTP systems with Kafka
Experience designing and supporting ELT/ETL pipelines safely at scale
Deep understanding of ACID principles, transaction isolation, and database internals
Experience with AWS-managed database services (RDS, Aurora, DynamoDB) and infrastructure cost management
Experience evaluating or migrating data warehouse platforms (Redshift, Snowflake, Databricks, or similar)
Experience designing or redesigning dimensional or analytical data models for large-scale warehouse environments
Experience separating OLTP and OLAP workloads and making the architectural decisions at that boundary
Ability to balance performance, cost, and compliance at an organizational level
Strong communication skills with the ability to translate technical complexity for non-technical stakeholders