Collaborate with scientists, assay teams, and lab operations to capture end-to-end assay and experimental workflows, from sample onboarding and execution through data ingestion, validation, and downstream analytics
Translate scientific and operational requirements into well-defined functional, technical, and data requirements for laboratory platforms, system integrations, and next-generation data pipelines
Design, develop, and maintain Python-based backend services, APIs, and data pipelines on AWS
Build backend services using FastAPI and supporting frameworks such as Flask or Django
Develop and maintain RESTful APIs and microservices for integration with scientific systems including Benchling, LIMS, ELN, CDS, and SDMS
Design and optimize SQL and NoSQL data models to support structured, semi-structured, and high-volume scientific data
Build and support ETL/ELT and next-generation data pipelines for analytics and AI/ML workloads
Support AI/ML use cases by preparing datasets, enabling feature engineering, and integrating models into pipelines and applications
Implement and maintain CI/CD pipelines for automated build, testing, and deployment
Apply Test-Driven Development (TDD) practices and develop automated unit, integration, and data validation tests
Ensure solutions meet performance, data integrity, security, and regulatory compliance requirements (e.g., GxP, 21 CFR Part 11)
Perform code reviews, debugging, and performance optimization
Coordinate across cross-functional and geographically distributed teams, managing dependencies and ensuring delivery alignment
Create delivery-ready technical documentation and track deliverables using JIRA and Confluence
Requirements
Bachelor’s or Master’s degree in Computer Science, Engineering, or Life Sciences, with 2–5 years of hands-on experience in Python development with FastAPI (Flask or Django is a plus)
Proficiency in SQL, including schema design, complex queries, and performance optimization
Solid understanding of REST APIs, microservices, and integration patterns
AWS experience, including S3, EC2, Lambda, Step Functions, RDS / Aurora, IAM, monitoring, and logging
Proficiency with Git-based collaborative development, including branch management, pull requests, code reviews, and integration with CI/CD pipelines (GitHub Actions, GitLab CI, Jenkins, AWS CodePipeline) to ensure reliable and traceable software delivery
Hands-on experience with Test-Driven Development and Python testing frameworks such as pytest, unittest, and mocking libraries
Working knowledge of AI/ML concepts, including data preparation, feature engineering, model integration, and inference workflows
Exposure to data and ML libraries such as pandas, NumPy, and scikit-learn (exposure to TensorFlow or PyTorch is a plus)
Exposure to life sciences, biotech, pharma, or healthcare domains and scientific platforms such as LIMS, ELN, SDMS, CDS, or data lakes
Ability to design data models aligned with scientific and assay workflows, integrate scientific or enterprise systems, and work directly with scientists and lab users
Knowledge of containerization (Docker) and modern deployment best practices
Familiarity with Agile/Scrum and SDLC methodologies
Strong communication, stakeholder engagement, and cross-team coordination skills
Tech Stack
AWS
Django
Docker
EC2
ETL
Flask
Jenkins
Microservices
NoSQL
NumPy
Pandas
Python
PyTorch
scikit-learn
SDLC
SQL
TensorFlow
Benefits
CURIOSITY DRIVEN, SCIENCE FOCUSED, EMPLOYEE BUILT. Our culture is unlike any other: one where we debate, challenge ourselves, and interact with everyone as equals.
We are a curious bunch, characterized by our passion to learn and spirit of teamwork.
We offer a competitive compensation package including accrued vacation, medical, dental, vision, 401k with company matching, life insurance, and flexible spending accounts.
Zifo is an equal opportunity employer, and we value diversity at our company.