Sennos is rapidly emerging as the global leader in AI-driven sensing, analytics, and control for the Fluid, Fermentation, and Bio-manufacturing industries.

The Data Engineer builds and maintains data pipelines, supports the development of Sennos' modern data platform, and collaborates with teams across the company to ensure data quality and integrity.
Responsibilities:
- Build and maintain ETL/ELT pipelines using SQL and Python under the guidance of senior data engineering leadership
- Develop and maintain transformations using dbt or similar tools within a Snowflake-based warehouse
- Create and optimize datasets and views to support analytics, reporting, machine learning, and product feature development
- Handle ad hoc data requests accurately and efficiently while maintaining data integrity and consistency
- Implement and maintain data quality checks, validation rules, and testing processes to ensure reliability and trust in warehouse data (a minimal sketch follows this list)
- Support the enforcement of data contracts between source systems and the warehouse
- Assist in reverse ETL workflows to operationalize warehouse data into downstream systems
- Contribute to ML data preparation and feature pipeline workflows
- Collaborate closely with Data Architecture, Analytics Engineering, Product, and Software Engineering teams
- Contribute to documentation, governance practices, and continuous improvement of data engineering standards
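To make the data-quality responsibilities above concrete, here is a minimal sketch in Python (with pandas) of the kind of batch validation this role might implement before loading data into the warehouse. The table shape, column names, and value thresholds are hypothetical illustrations, not drawn from Sennos' actual systems.

```python
import pandas as pd


def validate_batch(df: pd.DataFrame) -> list[str]:
    """Run basic quality checks on a hypothetical sensor-readings batch.

    Returns human-readable failure messages; an empty list means the
    batch passed all checks.
    """
    failures: list[str] = []

    # Completeness: required columns must be present.
    required = {"sensor_id", "reading_ts", "value"}
    missing = required - set(df.columns)
    if missing:
        failures.append(f"missing columns: {sorted(missing)}")
        return failures  # remaining checks assume these columns exist

    # Validity: no null keys or timestamps.
    for col in ("sensor_id", "reading_ts"):
        nulls = int(df[col].isna().sum())
        if nulls:
            failures.append(f"{col}: {nulls} null value(s)")

    # Uniqueness: one reading per sensor per timestamp.
    dupes = int(df.duplicated(subset=["sensor_id", "reading_ts"]).sum())
    if dupes:
        failures.append(f"{dupes} duplicate (sensor_id, reading_ts) row(s)")

    # Range check: an illustrative, made-up acceptable band.
    out_of_range = int((~df["value"].between(-50.0, 150.0)).sum())
    if out_of_range:
        failures.append(f"{out_of_range} value(s) outside [-50, 150]")

    return failures


if __name__ == "__main__":
    batch = pd.DataFrame({
        "sensor_id": ["a1", "a1", "b2"],
        "reading_ts": ["2024-01-01T00:00", "2024-01-01T00:00", None],
        "value": [21.5, 21.5, 999.0],
    })
    for msg in validate_batch(batch):
        print("FAIL:", msg)
```

In practice, checks like these would more likely live in dbt tests or an orchestrated pipeline task than in a standalone script, but the logic is the same.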
Requirements:
- Bachelor's degree in Computer Science, Data Science, Engineering, or related field (or equivalent years of professional experience)
- 2–4 years of experience in data engineering or a related data-focused role
- Experience working with ETL/ELT processes and structured warehouse data
- Exposure to cloud-based data platforms (AWS preferred)
- Strong SQL skills (joins, window functions, and query optimization fundamentals; a window-function example follows this list)
- Proficiency in Python for data processing, scripting, or automation
- Familiarity with version control systems (e.g., Git)
- Strong attention to detail and commitment to data accuracy
- Ability to troubleshoot and debug data workflows effectively
- Strong written and verbal communication skills
- Ability to collaborate across technical and non-technical teams
- Experience working with Snowflake or similar cloud data warehouses
- Exposure to dbt or similar transformation frameworks
- Basic familiarity with dimensional modeling concepts (e.g., facts, dimensions, and star schemas)
- Experience implementing data quality tests or validation frameworks
- Exposure to data contracts or schema management practices
- Familiarity with reverse ETL concepts
- Familiarity with workflow orchestration tools (e.g., Airflow or Dagster)
- Familiarity with CI/CD practices for data workflows
- Experience using AI-assisted tools to support debugging, pipeline development, or data engineering workflows
- Exposure to BI tools (e.g., Power BI, Tableau, Looker)
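As a small illustration of the SQL fundamentals listed above, here is a self-contained script using Python's standard-library sqlite3 module to run a window-function query (a per-sensor running average). The schema and data are invented for the example, and SQLite 3.25+ is assumed for window-function support.

```python
import sqlite3

# In-memory database; sqlite3 ships with Python, and window
# functions are available as of SQLite 3.25.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE readings (
        sensor_id TEXT,
        reading_ts TEXT,
        value REAL
    );
    INSERT INTO readings VALUES
        ('a1', '2024-01-01T00:00', 20.0),
        ('a1', '2024-01-01T01:00', 22.0),
        ('a1', '2024-01-01T02:00', 21.0),
        ('b2', '2024-01-01T00:00', 5.0),
        ('b2', '2024-01-01T01:00', 7.0);
""")

# Window function: per-sensor running average, ordered by timestamp.
query = """
    SELECT
        sensor_id,
        reading_ts,
        value,
        AVG(value) OVER (
            PARTITION BY sensor_id
            ORDER BY reading_ts
        ) AS running_avg
    FROM readings
    ORDER BY sensor_id, reading_ts;
"""
for row in conn.execute(query):
    print(row)
conn.close()
```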