Stryker is a company that delivers innovative products and services in MedSurg, Neurotechnology, Orthopaedics, and Spine. They are seeking a Lead Data Engineer to play a key role in data projects across the Instruments division, leading the design, development, testing, and monitoring of data pipelines while collaborating with analysts and data scientists.
Responsibilities:
- Translate business requirements into robust and performant data pipelines with appropriate ETL/ELT and monitoring/logging steps for a wide variety of data sources, such as Databricks Unity Catalog, SAP HANA, Microsoft SSAS, Azure SQL Server, Oracle, Parquet files, APIs, and Excel/CSV files
- Lead and mentor others in advanced troubleshooting and root-cause analysis
- Independently gather and document requirements, business logic, impact assessments, and technical documentation needed to maintain and troubleshoot key systems and data assets
- Lead discussions with cross-divisional Stryker peers to collaboratively harness domain expertise and emerging capabilities to drive impact
- Contribute insights in discussions with key stakeholders to identify opportunities in data architecture and data movement that enable business value
- With some guidance, lead and deliver presentations and communications to medium-sized groups that build data engineering credibility and rapport
Requirements:
- Bachelor's degree in computer science, data analytics, mathematics, statistics, data science, or related field, or 6 years of additional data engineering work experience in lieu of degree
- Minimum of 4 years' experience in data engineering or related technical role
- Experience in building and managing end‑to‑end data engineering solutions, from problem definition and requirements through architecture, deployment, and maintenance
- Experience in SQL and proficiency in at least one programming language central to data engineering (e.g., Python, PowerShell, R, Spark)
- Experience with ETL/ELT processes, optimizing and maintaining data pipelines, and data warehouse architecture
- Experience with DataOps, DevOps, and infrastructure-as-code practices using GitHub/GitLab
- Experience using cloud‑based data platforms or distributed computing technologies
- Experience performing code reviews, enforcing quality standards, and deploying improvements to production environments
Preferred qualifications:
- Master's degree
- Experience with Microsoft data services, including Power BI, Microsoft Fabric, Azure Data Factory, Azure SQL Server, and Databricks
- Experience with database administration
- Experience with a data-driven reporting platform, such as Power BI, Tableau, or SSRS