Lead and manage a team of data engineers, fostering a culture of innovation, ownership, and continuous improvement.
Define and execute the data engineering roadmap aligned with organizational goals and product strategy.
Architect, design, and oversee the implementation of scalable, reliable, and high-performance data pipelines and data platforms.
Collaborate with cross-functional teams including Product, Analytics, Data Science, and DevOps to deliver end-to-end data solutions.
Establish best practices for data governance, data quality, security, and compliance.
Drive the adoption of modern data technologies and frameworks, ensuring systems are future-ready and cost-efficient.
Review system architecture and code to ensure scalability, maintainability, and performance optimization.
Manage stakeholder expectations, prioritize initiatives, and ensure timely delivery of projects.
Mentor team members, support career growth, and build leadership within the team.
Monitor and improve system reliability, uptime, and performance through proactive measures and observability tools.
Requirements
Minimum experience: 12 years
Strong experience in data engineering, including building large-scale data pipelines and distributed systems.
Hands-on expertise with modern data stack technologies such as SQL/NoSQL databases, ETL/ELT tools, and data warehousing solutions (e.g., Snowflake, BigQuery, Redshift).
Proficiency in programming languages such as Python, Java, or Scala.
Experience with big data technologies like Apache Spark, Hadoop, or Kafka.
Deep understanding of cloud platforms such as AWS, Azure, or Google Cloud.
Strong knowledge of data modeling, data architecture, and data lifecycle management.
Experience implementing data governance, security, and compliance frameworks.
Familiarity with CI/CD pipelines, DevOps practices, and infrastructure-as-code tools.
Proven track record of leading and scaling high-performing engineering teams.
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.