Asurion helps people protect, connect, and enjoy the latest tech—making life a little easier. The Staff Data Engineer leads the design and delivery of scalable data solutions that power analytics and reporting, focusing on optimizing data infrastructure and ensuring data quality.
Responsibilities:
- Lead the design and development of scalable data pipelines and data products using Databricks, Spark, and Delta Lake
- Develop and optimize data transformations and ELT workflows using SQL and Python
- Design and implement data models and curated datasets to support analytics and reporting use cases
- Ensure data quality, consistency, and reliability through validation, monitoring, and testing practices
- Optimize pipeline performance, scalability, and cost efficiency within AWS environments
- Apply best practices for data partitioning, storage optimization, and query performance tuning
- Collaborate with product, analytics, and business teams to translate requirements into efficient data solutions
- Provide technical leadership and mentorship to engineers, including code reviews and design guidance
- Leverage AI tools for coding, validation, and documentation assistance to enhance productivity and code quality
- Troubleshoot and resolve data pipeline failures, latency issues, and data inconsistencies
- Continuously evaluate and improve data engineering workflows and tooling
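To give a flavor of the validation and monitoring practices mentioned above, here is a minimal sketch of a record-level quality check with a quarantine split. It is purely illustrative: the field names (`claim_id`, `amount`, `created_at`) and rules are hypothetical, not an actual Asurion schema, and in practice this logic would typically run inside a Spark/Databricks job.

```python
from datetime import datetime

# Fields every record must carry before it can be published
# to a curated dataset (hypothetical schema).
REQUIRED_FIELDS = {"claim_id", "amount", "created_at"}

def validate_record(record: dict) -> list[str]:
    """Return a list of data-quality violations for one record."""
    errors = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    amount = record.get("amount")
    if amount is not None and amount < 0:
        errors.append(f"negative amount: {amount}")
    created_at = record.get("created_at")
    if created_at is not None:
        try:
            datetime.fromisoformat(created_at)
        except (TypeError, ValueError):
            errors.append(f"bad timestamp: {created_at!r}")
    return errors

def partition_records(records):
    """Split records into (valid, rejected) for quarantine-style handling."""
    valid, rejected = [], []
    for rec in records:
        errs = validate_record(rec)
        if errs:
            rejected.append((rec, errs))
        else:
            valid.append(rec)
    return valid, rejected
```

Rejected records, together with their violation messages, can then be routed to a quarantine table and surfaced through monitoring rather than silently dropped.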
Requirements:
- 8+ years of experience in data engineering or data pipeline development
- Strong hands-on experience with Databricks, Apache Spark, and Delta Lake
- Advanced proficiency in SQL and Python for building and optimizing data pipelines
- Experience developing robust ETL/ELT pipelines and handling complex data transformations
- Hands-on experience with AWS cloud services (e.g., S3, EMR, Lambda, Glue, Redshift, Kinesis)
- Strong understanding of data modeling and data warehousing concepts
- Experience working with terabyte-scale or larger datasets in distributed environments
- Knowledge of data quality frameworks, validation techniques, and monitoring practices
- Familiarity with CI/CD pipelines and modern development workflows
- Experience using AI-assisted development tools for code generation, validation, or documentation
- Strong problem-solving skills with the ability to debug complex data issues
- Bachelor's degree in Computer Science, Engineering, or related field
- Proven experience leading technical initiatives or large-scale data pipeline projects
- Master's degree in a technical field
- Experience in large-scale, enterprise data environments
- Cloud certifications (AWS, Databricks)