Digital Media Solutions (DMS) is a performance-driven digital marketing company that connects consumers and brands through data, technology, and proprietary media solutions. The Manager, Data Engineering will lead a team responsible for the reliability, scalability, and performance of data infrastructure, managing platforms and pipelines that power DMS’s data-driven business.
Responsibilities:
- Lead, mentor, and grow a team of data engineers and architects
- Define and execute the technical roadmap for production database systems (MySQL, PostgreSQL, DynamoDB, Elasticsearch)
- Own the architecture and governance of binlog replication, logical replication, and CDC workflows
- Drive strategy and reliability for ELT/ETL pipelines and Kafka-based streaming architectures
- Set standards for performance optimization, query tuning, indexing, and database scaling across teams
- Oversee backup, failover, disaster recovery (including point-in-time recovery, PITR), and incident response for all production data systems
- Drive cost efficiency, infrastructure optimization, and monitoring across cloud-managed data services (AWS RDS, Aurora, DynamoDB)
- Champion data integrity, security, and compliance standards across all data engineering work
- Partner cross-functionally with backend, data science, infrastructure, and product teams to align on data platform priorities
- Establish engineering guardrails, best practices, and documentation to enable team autonomy and quality at scale
- Lead the evaluation and selection of next-generation data warehousing technology (Snowflake, Databricks, AWS Redshift Serverless) — assessing performance, cost, ecosystem fit, and migration complexity to inform a platform decision
- Own the design of an upgraded data model for the warehouse in partnership with data engineers and architects, establishing standards for schema design, partitioning, access patterns, and downstream consumption
- Oversee the end-to-end migration from the current Redshift warehouse — planning the phased approach, managing cutover risk, and ensuring continuity of downstream reporting and analytics throughout
Requirements:
- 8+ years of data or database engineering experience, with 2+ years in an engineering management role
- Deep hands-on experience with MySQL and/or PostgreSQL in high-availability production environments
- Proven track record leading and developing teams of engineers in fast-paced, data-intensive environments
- Strong expertise in binlog/logical replication and integrating OLTP systems with Kafka
- Experience designing and operating ELT/ETL pipelines safely at scale
- Deep understanding of ACID principles, transaction isolation, and database internals
- Experience with AWS-managed database services (RDS, Aurora, DynamoDB) and infrastructure cost management
- Experience evaluating or migrating data warehouse platforms (Redshift, Snowflake, Databricks, or similar)
- Experience designing or redesigning dimensional or analytical data models for large-scale warehouse environments
- Experience separating OLTP and OLAP workloads and making the associated architectural decisions
- Ability to balance performance, cost, and compliance at an organizational level
- Strong communication skills with the ability to translate technical complexity for non-technical stakeholders
- Experience with CDC implementation and large-scale database migrations
- Infrastructure-as-code experience (Terraform)
- Background in high-volume, transaction-intensive production environments
- Prior experience building or scaling a data engineering function from the ground up