Carrot is a global fertility and family care platform that supports members and their families through significant life moments. The Senior Data Engineer will lead the development and maintenance of Carrot's data infrastructure, delivering reliable data solutions and collaborating with stakeholders across the business to enhance analytics and reporting capabilities.
Responsibilities:
- Lead the evolution of a modern data infrastructure, ensuring its reliability and scalability across analytics, reporting, and business intelligence
- Architect, build, and maintain robust, automated data pipelines
- Orchestrate workflows and integrate diverse data sources
- Partner with stakeholders across the business to deliver secure, compliant, high-quality data solutions
- Own the full data lifecycle, from architecting resilient ETL/ELT pipelines to developing real-time and batch data integrations
- Automate reporting, pay down technical debt, and enable advanced analytics
- Drive improvements in system maturity, cost efficiency, automation, and collaborative practices
Requirements:
- Expert-level proficiency in Snowflake, dbt, and Python for data modeling, transformation, pipeline development, and advanced analytics
- Advanced SQL skills for complex querying, large-scale data model design, and database optimization
- Proven experience architecting, building, testing, deploying, and maintaining scalable, automated ETL/ELT pipelines using modern orchestration tools such as Prefect and Airflow
- Hands-on administration of cloud data warehouse and data lake platforms (Snowflake, AWS S3 and Redshift, Google Cloud), including integration of new sources and performance optimization
- Strong understanding of secure external data flows via SFTP, managed file transfer services such as Files.com, and APIs, with a focus on reliability and compliance
- Mastery of version control (Git, GitHub) and structured workflow management (Jira) for code review, audit trails, and operational transparency
- Demonstrated ability to automate and simplify complex manual tasks, building robust, reusable pipeline components that reduce operational overhead
- Deep experience orchestrating workflows with custom scheduling, alerting, dependency management, monitoring, and error resolution
- Dependability, adaptability, and a collaborative approach to engineering, with the ability to thrive in dynamic, ambiguous environments
- Extensive hands-on experience in dynamic, fast-paced settings, including startup or rapid-growth environments, delivering technical solutions under shifting priorities
- Direct experience supporting cross-functional teams (business intelligence, analytics, and data science) by enabling large-scale integrations, multi-tenant reporting, and advanced analytics tooling
- Familiarity with both batch and event-driven ingestion paradigms, including building near-real-time data pipelines utilizing Snowflake, dbt, and Python
- Experience automating complex operational, financial, or compliance-driven workflows within cloud data environments
- Exposure to audit, privacy, and compliance frameworks (SOC, HIPAA, GDPR, ISO, SOX), particularly in implementing robust data governance and secure access controls
- Experience working with regulated industries such as healthcare, healthtech, or SaaS environments