Encora10 is seeking an experienced Data Engineer to join its growing data team and play a key role in modernizing its analytics platform. The role involves designing and maintaining scalable data pipelines in Databricks, leading the migration of Power BI reports, and collaborating with stakeholders to ensure data performance and reliability.
Responsibilities:
- Design, build, and maintain scalable data pipelines using Databricks (Delta Lake, Unity Catalog, Spark)
- Lead or significantly contribute to the migration and rehydration of approximately 500 Power BI reports in 2026, including re-pointing and optimizing their data sources
- Implement and maintain CI/CD pipelines for data assets using Databricks Asset Bundles (DAB), GitHub Actions, and other modern DevOps practices
- Collaborate with data analysts, BI developers, and business stakeholders to ensure data availability, performance, and reliability
- Optimize ETL/ELT processes for performance, cost, and maintainability
- Establish best practices for version control, testing, and deployment of notebooks, workflows, and Delta Live Tables
Requirements:
- 4+ years of hands-on data engineering experience (5-7+ years of overall professional experience)
- Strong proficiency in Python and SQL
- Deep experience with Databricks (workspace administration, cluster management, Delta Lake, Unity Catalog, workflows, and notebooks)
- Proven track record implementing CI/CD for data workloads (preferably using Databricks Asset Bundles and GitHub Actions)
- Solid understanding of Spark (PySpark and/or Spark SQL)
- Experience with infrastructure-as-code and modern data DevOps practices
- Databricks Certified Data Engineer Associate or Professional
- Azure Data Engineer Associate (DP-203) or an equivalent AWS/GCP certification