Data Engineer at Samba TV | JobVerse
Samba TV
Data Engineer
Poland
Full Time
H1B Sponsor
Key skills
Apache Airflow
Apache Spark (PySpark, Scala)
AWS (S3)
BigQuery
Snowflake
Databricks
Unity Catalog
Kubernetes
Python
SQL
Data Engineering
About this role
Build scalable data product architecture
Modernize data frameworks and integrations with Databricks and BigQuery
Upgrade Apache Airflow and reduce developer toil
Develop and optimize data transformations using Apache Spark (PySpark/Scala)
Build procedures and guidelines to help teams operate with data
Identify bottlenecks in our development lifecycle and find solutions to improve them
Drive innovation throughout the tech org by evangelizing and educating teams on best practices and new technologies
Work directly with our data and FinOps teams to drive cross-team efforts
Implement data governance, access control, and auditing using Databricks Unity Catalog
Build and integrate automated, reusable data validation suites using data quality frameworks (Great Expectations or similar)
Implement monitoring and anomaly detection systems for data quality, reliability, and performance
Develop and manage REST APIs to support secure data access, automation, and integration
Collaborate with data scientists, analysts, and software engineers to deliver governed, reusable data assets
Implement monitoring, logging, and alerting for data workflows
Optimize cost and performance of cloud-based data infrastructure
Requirements
Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent experience)
5+ years of experience in data engineering or a related role
Strong hands-on experience with Databricks, Apache Spark and BigQuery or Snowflake
Proven experience with modern table formats such as Delta Lake and Iceberg
A deep understanding of the data lifecycle and how teams operate with data
Hands-on experience implementing data governance and metadata management using Databricks Unity Catalog
Experience managing and extending Apache Airflow (custom operators, plugins, infrastructure)
Experience with Kubernetes
Solid experience with AWS cloud services, especially S3 and data-related services
Experience with data validation and data quality principles, and with SLA-driven systems
Proficiency in Python and SQL
Benefits
Equal opportunity employer
Celebration of diversity
Commitment to an inclusive environment