Pomeroy is seeking a Databricks Platform Engineer to implement and operationalize core platform capabilities on Databricks running on AWS. The role focuses on building ingestion pipelines, ensuring data quality and governance, and implementing CI/CD automation.
Responsibilities:
- Implement metadata-driven ingestion frameworks and standard ingestion patterns
- Build batch, incremental, and CDC ingestion pipelines
- Ensure pipelines follow platform standards for configuration, schema evolution, tagging, and error handling
- Implement pipeline monitoring, metrics, and alerting
- Support operational validation, failure simulation, and recovery testing
- Embed data quality rules and quarantine patterns
- Apply Unity Catalog tags (domain, owner, sensitivity, freshness) at ingestion time
- Ensure governance metadata is written to governed schemas
- Build CI/CD pipelines using GitHub Actions
- Support configuration validation, environment promotion, and rollback
- Ensure secret management via AWS Secrets Manager
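To make the "metadata-driven ingestion framework" responsibility concrete, here is a minimal, hypothetical sketch of what one ingestion entry might look like: each source is described by configuration (including the governance tags the posting names) rather than bespoke pipeline code. All names (`IngestionConfig`, the S3 path, the table name) are illustrative assumptions, not part of the posting.

```python
from dataclasses import dataclass, field

@dataclass
class IngestionConfig:
    """One metadata-driven ingestion entry (illustrative sketch)."""
    source_path: str            # e.g. an S3 prefix for raw files (hypothetical)
    target_table: str           # Unity Catalog table, catalog.schema.table
    load_mode: str              # "batch", "incremental", or "cdc"
    primary_keys: list = field(default_factory=list)
    tags: dict = field(default_factory=dict)  # governance tags applied at ingestion

    def validate(self) -> list:
        """Return a list of configuration errors (empty if valid)."""
        errors = []
        if self.load_mode not in ("batch", "incremental", "cdc"):
            errors.append(f"unknown load_mode: {self.load_mode}")
        if self.load_mode == "cdc" and not self.primary_keys:
            errors.append("cdc mode requires primary_keys for merge")
        # The four tag keys mirror the ones listed in the posting.
        for required in ("domain", "owner", "sensitivity", "freshness"):
            if required not in self.tags:
                errors.append(f"missing governance tag: {required}")
        return errors

cfg = IngestionConfig(
    source_path="s3://raw/orders/",
    target_table="main.sales.orders",
    load_mode="cdc",
    primary_keys=["order_id"],
    tags={"domain": "sales", "owner": "data-eng",
          "sensitivity": "internal", "freshness": "hourly"},
)
assert cfg.validate() == []
```

Validating entries like this up front is what "configuration validation" before environment promotion typically means: a misconfigured source fails fast in CI rather than at runtime.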
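The "data quality rules and quarantine patterns" responsibility follows a common shape: rows that fail a rule are diverted to a quarantine set with the failure reason attached, instead of aborting the load. This pure-Python sketch shows the pattern; the rule names and fields are invented for illustration, and on Databricks the same idea would run over DataFrames.

```python
# Hypothetical quality rules: each maps a name to a predicate over one row.
RULES = {
    "order_id_present": lambda row: row.get("order_id") is not None,
    "amount_non_negative": lambda row: (row.get("amount") or 0) >= 0,
}

def apply_quality_rules(rows):
    """Split rows into (valid, quarantined); quarantined rows carry the
    comma-joined names of the rules they failed."""
    valid, quarantined = [], []
    for row in rows:
        failed = [name for name, rule in RULES.items() if not rule(row)]
        if failed:
            quarantined.append({**row, "_quarantine_reason": ",".join(failed)})
        else:
            valid.append(row)
    return valid, quarantined

valid, quarantined = apply_quality_rules(
    [{"order_id": 1, "amount": 10.0},
     {"order_id": None, "amount": -5.0}]
)
```

Recording the reason on each quarantined row is what makes the pattern operable: quarantine counts per rule feed directly into the monitoring and alerting the role also covers.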
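For the CDC ingestion responsibility, the core semantics are upserts and deletes keyed by primary key. In production this would be a Delta Lake `MERGE INTO`; the in-memory sketch below (with an assumed event shape of `key`/`op`/`row`) shows only the merge semantics, not the Databricks implementation.

```python
def apply_cdc_events(target, events):
    """Apply ordered CDC events to `target`, a dict of primary key -> row.
    Inserts and updates upsert the row; deletes remove the key if present."""
    for event in events:
        key, op = event["key"], event["op"]
        if op in ("insert", "update"):
            target[key] = event["row"]
        elif op == "delete":
            target.pop(key, None)   # delete is idempotent
        else:
            raise ValueError(f"unknown CDC op: {op}")
    return target

state = apply_cdc_events(
    {1: {"amount": 5}},
    [{"key": 2, "op": "insert", "row": {"amount": 7}},
     {"key": 1, "op": "delete"}],
)
```

Because events are applied in order and deletes are idempotent, replaying a batch after a failure converges to the same state, which is the property that failure simulation and recovery testing exercise.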
Requirements:
- 3-5 years of data or platform engineering experience
- 3+ years of hands-on Databricks experience
- Strong PySpark and SQL skills
- Experience with Delta Lake, Unity Catalog, Git-based development
- Ingestion framework experience
- Experience implementing monitoring and alerting
- Production platform support experience