Senior Data Platform Engineer – Data Core Processing Team
Warner Bros. Discovery
Location: India
Employment type: Full Time
Posted: 1 hour ago
Visa sponsorship: Not available
Key skills
Apache Airflow
Apache Kafka
Apache Spark
AWS
Azure
BigQuery
Cassandra
CI/CD
CloudFormation
CloudWatch
Databricks
Data Engineering
Data Lake
Datadog
Docker
DynamoDB
ELT
ETL
Git (version control)
Google Cloud Platform (GCP)
Grafana
Java
Kinesis
Kubernetes
NoSQL
PostgreSQL
Prometheus
Python
S3
Scala
Snowflake
SQL
Terraform
About this role
Role Overview
Has a track record of building multiple high-performance, stable, scalable data systems that have shipped successfully to production
Drives best practices and sets standards for the team in areas like data modeling, pipeline architecture, and system design
Is a key influencer in the team's strategy and contributes significantly to team planning
Shows good judgment when making trade-offs between immediate and long-term business needs
Is a results-driven creative thinker who drives innovation and produces delightful experiences for our data consumers and internal customers
Is an advocate for data-driven decision-making, has an insatiable curiosity, and loves to invent and innovate to solve difficult problems
Takes ownership of their work and consistently delivers results in a fast-paced environment
Troubleshoot production issues by reviewing logs, metrics, data pipelines, and system health to pinpoint specific problems and then resolve them
Identify root causes and capture learnings to improve both development processes and system design
Provide guidance on design, coding, and operational best practices for data engineering
Mentor junior engineers, overseeing their designs, code quality, and integration into the team.
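The troubleshooting loop above — scanning logs and metrics to pinpoint failures — can be sketched in a few lines of stdlib Python. The log format, field names, and alert threshold below are illustrative assumptions, not part of the role description:

```python
import re
from collections import Counter

# Hypothetical pipeline log lines; the format is an assumption for illustration.
LOG_LINES = [
    "2024-05-01T10:00:00 INFO pipeline=orders step=extract rows=1200",
    "2024-05-01T10:01:00 ERROR pipeline=orders step=load msg=timeout",
    "2024-05-01T10:02:00 INFO pipeline=orders step=load rows=1200",
    "2024-05-01T10:03:00 ERROR pipeline=orders step=load msg=timeout",
]

def error_rate(lines):
    """Count log levels (second whitespace-separated field) and return the ERROR fraction."""
    levels = Counter(re.split(r"\s+", line)[1] for line in lines)
    total = sum(levels.values())
    return levels["ERROR"] / total if total else 0.0

def should_alert(lines, threshold=0.25):
    """Flag the pipeline for investigation when the error rate exceeds the threshold."""
    return error_rate(lines) > threshold

print(error_rate(LOG_LINES))   # 0.5
print(should_alert(LOG_LINES)) # True
```

In practice this kind of check would live in a monitoring system such as Prometheus or Datadog rather than ad-hoc scripts, but the shape — aggregate signals, compare against a threshold, alert — is the same.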
Requirements
Proficient in Python and/or Scala/Java with 5-9 years of total experience in data engineering or related fields
Strong experience with distributed data processing frameworks such as Apache Spark, Flink, or similar technologies
Deep understanding of data pipeline orchestration tools like Apache Airflow, Dagster, or Prefect
Experience with cloud data platforms (AWS, GCP, or Azure) including services like S3, Redshift, BigQuery, Databricks, or Snowflake
Hands-on experience with real-time streaming technologies such as Apache Kafka or Kinesis
Experience with data lake/lakehouse architectures and formats like Parquet, Avro, Delta Lake, or Iceberg
Strong understanding of SQL and database technologies, including PostgreSQL and NoSQL databases like DynamoDB or Cassandra
Experience with containerization (Docker) and orchestration (Kubernetes) in data engineering contexts
Familiarity with infrastructure-as-code tools like Terraform or CloudFormation
Ability to implement comprehensive monitoring, alerting, and observability using tools like Prometheus, Grafana, CloudWatch, or Datadog
Strong understanding of data modeling, ETL/ELT patterns, and data warehouse design principles
Experience with CI/CD pipelines and version control systems (Git).
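As a rough illustration of the ETL/ELT and warehouse-loading patterns the requirements mention, here is a minimal stdlib-only sketch using an in-memory SQLite table standing in for a warehouse. The table name, columns, and upsert scheme are hypothetical:

```python
import sqlite3

# Toy in-memory "warehouse"; table and column names are illustrative assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE fact_orders (
        order_id INTEGER PRIMARY KEY,
        amount_usd REAL,
        status TEXT
    )
""")

# Extract: raw records as they might arrive from an upstream source.
raw = [
    {"order_id": 1, "amount_cents": 1999, "status": "SHIPPED"},
    {"order_id": 2, "amount_cents": 500, "status": "pending"},
]

# Transform: normalize units and casing before loading.
rows = [(r["order_id"], r["amount_cents"] / 100, r["status"].lower()) for r in raw]

# Load: an idempotent upsert, so reruns of the pipeline do not duplicate rows.
conn.executemany(
    "INSERT INTO fact_orders VALUES (?, ?, ?) "
    "ON CONFLICT(order_id) DO UPDATE SET "
    "amount_usd = excluded.amount_usd, status = excluded.status",
    rows,
)
conn.commit()
```

The idempotent load step matters in production: orchestrators like Airflow retry failed tasks, so a plain `INSERT` would duplicate data on retry, whereas an upsert keyed on the natural key converges to the same state however many times it runs.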
Benefits
A great place to work
Equal opportunity employer
Fast-track growth opportunities