TIDI Products is a market-leading manufacturer of disposable infection prevention and patient safety products, headquartered in Neenah, WI. The company is seeking a Data Engineer to collaborate on the design, development, and implementation of scalable data solutions within its cloud-based data platform and Lakehouse architecture, supporting enterprise reporting and advanced analytics initiatives.
Responsibilities:
- Collaborate on the design, development, and implementation of scalable end-to-end data solutions across modern cloud data platforms
- Develop and maintain Lakehouse, warehouse, and data pipeline solutions supporting enterprise analytics, reporting, and AI initiatives
- Build and optimize scalable data pipelines and ETL/ELT processes that ingest structured and unstructured data from internal and external sources, incorporating data quality controls and entity de-duplication
- Contribute to improving the performance, scalability, and efficiency of data pipelines and distributed processing workloads
- Contribute to the development and refinement of scalable data models and storage architecture
- Support the advancement of a centralized, high-quality data environment (“single source of truth”)
- Ensure data platforms are structured and governed to support advanced analytics and AI use cases
- Monitor, troubleshoot, and optimize data workflows for performance, reliability, and cost efficiency
- Translate complex business requirements into well-structured technical solutions
- Uphold best practices for data governance, validation, security, compliance, and documentation
- Collaborate with data scientists, analysts, and software engineers to enable advanced analytics initiatives
- Identify and recommend improvements to tools, processes, and overall platform performance
Requirements:
- Bachelor's degree in Computer Science, Engineering, Information Systems, or related field (or equivalent experience)
- 5-8 years of experience in data engineering, analytics engineering, or related technical field
- Experience integrating and modernizing multi-source enterprise data, including ERP systems
- Experience working within modern cloud-based data platforms or Lakehouse architecture
- Proven experience designing and optimizing scalable data pipelines and ETL/ELT processes
- Strong experience with Spark (PySpark) and distributed data processing, advanced SQL proficiency, and a strong understanding of data modeling
- Experience working with structured and unstructured data sources
- Working knowledge of data governance, security, and compliance best practices
- Strong analytical, troubleshooting, and communication skills
- Ability to operate independently while contributing to broader technical initiatives
- Familiarity with data preparation and architecture considerations that support AI and machine learning workloads
- Certification or hands-on experience with Microsoft Fabric and/or Databricks