Omnicell is seeking an Engineer IV in Data Engineering to translate business needs into detailed designs and implementation plans. The role involves leading implementation efforts, guiding junior engineers, and ensuring production readiness through various engineering best practices.
Responsibilities:
- Translate business needs and architectural guidance into detailed designs, data contracts, and implementation plans that break down large initiatives into actionable engineering tasks with reliable estimates
- Create detailed pipeline designs covering schemas, transformations, partitioning, DLT configurations, orchestration, error handling, and observability that align with the platform architecture through close collaboration with the Data Architect
- Lead implementation and guide junior engineers on design, coding standards, and best practices
- Develop metadata-driven and configuration-driven pipeline patterns that reduce custom code and improve consistency
- Make technical decisions that ensure reliability, performance, maintainability, and scalability, and ensure production readiness with monitoring, lineage, alerting, observability, CI/CD, and documentation
- Define and enforce engineering design patterns, coding standards, testing practices, and operational best practices
- Evaluate and incorporate new technologies and Databricks capabilities that improve reliability, performance, or developer productivity
- Validate new technologies with the Data Architect and operationalize them through documentation, examples, and enablement
- Implement automated data quality checks, rule enforcement, and exception handling
- Provide production support for both the existing and new platforms, including job optimization, incident tracking, and other analysis required for production operations
- Lead resolution of complex production issues and deliver durable root cause fixes
- Maintain SLAs for reliability, recovery, idempotency, performance, and cost efficiency
- Mentor Level 2–3 engineers through pairing, design guidance, code reviews, and technical coaching
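The metadata-driven and configuration-driven pipeline patterns named in the responsibilities above can be illustrated with a minimal Python sketch. The names here (`PIPELINE_CONFIG`, `run_pipeline`, the transform registry) are hypothetical, not an actual Omnicell implementation; the point is that each pipeline step is declared as configuration and executed by one generic runner, so onboarding a new dataset means adding configuration rather than custom code:

```python
# Minimal sketch of a configuration-driven pipeline: each step is declared
# as data, and a single generic runner executes any pipeline described this
# way. All names are illustrative, not a real platform API.

# Registry of reusable transformations, keyed by the name used in config.
TRANSFORMS = {
    "rename": lambda rows, spec: [
        {spec.get(k, k): v for k, v in row.items()} for row in rows
    ],
    "filter": lambda rows, spec: [
        row for row in rows if row.get(spec["column"]) == spec["equals"]
    ],
}

# Declarative pipeline definition: the "custom code" is just this config.
PIPELINE_CONFIG = [
    {"op": "rename", "spec": {"qty": "quantity"}},
    {"op": "filter", "spec": {"column": "site", "equals": "A"}},
]

def run_pipeline(rows, config):
    """Apply each configured transformation in order and return the result."""
    for step in config:
        rows = TRANSFORMS[step["op"]](rows, step["spec"])
    return rows

source = [
    {"site": "A", "qty": 3},
    {"site": "B", "qty": 5},
]
result = run_pipeline(source, PIPELINE_CONFIG)
# result → [{"site": "A", "quantity": 3}]
```

The same idea scales to PySpark or DLT by swapping the list-of-dicts transforms for DataFrame operations while keeping the declarative config and generic runner unchanged.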
Requirements:
- Bachelor's degree preferred; equivalent experience accepted
- 10+ years in data engineering (12+ without a degree)
- 4+ years building production-grade batch/streaming pipelines using PySpark, Spark Structured Streaming, Python, and SQL
- Proven experience with data governance, schema evolution, data lineage, and secure access patterns
- 2+ years' proven experience maintaining and sustaining production data pipelines
- 3+ years hands-on with Databricks (Delta Lake, DLT, Unity Catalog, workflow jobs) within the last 6 years
- Experience building metadata-driven or configuration-driven pipelines
- Experience with data quality frameworks (DQX, Great Expectations, or equivalent)
- Experience with observability, metrics, and query performance analysis
- Strong Spark optimization skills
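The automated data quality checks, rule enforcement, and exception handling referenced above (and formalized by frameworks such as Great Expectations or DQX) can be sketched in plain Python. The rule names and the `validate` helper are hypothetical illustrations of the pattern, not any framework's real API: failing rows are quarantined with the reasons they failed rather than silently flowing downstream:

```python
# Illustrative sketch of automated data quality checks: each rule is a named
# predicate, and rows failing any rule are quarantined with the failure
# reasons. Names (RULES, validate) are hypothetical, not a framework API.

RULES = [
    ("quantity_non_negative", lambda row: row["quantity"] >= 0),
    ("site_not_null", lambda row: row.get("site") is not None),
]

def validate(rows, rules):
    """Split rows into passing records and quarantined failures."""
    passed, quarantined = [], []
    for row in rows:
        failed = [name for name, check in rules if not check(row)]
        if failed:
            quarantined.append({"row": row, "failed_rules": failed})
        else:
            passed.append(row)
    return passed, quarantined

batch = [
    {"site": "A", "quantity": 3},
    {"site": None, "quantity": -1},
]
good, bad = validate(batch, RULES)
# good → [{"site": "A", "quantity": 3}]; bad holds the quarantined row
# along with ["quantity_non_negative", "site_not_null"]
```

In a Databricks context the same pattern maps naturally onto DLT expectations, where each named rule becomes an expectation on a table and failing rows can be dropped or quarantined by policy.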