Define schemas and data models for scientific information required to create value-adding data products.
This includes accountability for the quality control (through validation and verification) of mapping specifications to be industrialized by data engineering and maintained in platform-provisioned tooling, e.g., models, schemas, and controlled vocabularies.
Work with product managers and engineers to confidently convert business needs into well-defined, deliverable business requirements, enabling the integration of large-scale biology data to predict, model, and stabilize therapeutically relevant protein complex and antigen conformations for drug and vaccine discovery.
Collaborate with external groups to align GSK data standards with industry and academic ontologies, ensuring that data standards are defined with usage and analytics in mind.
Support effective ingestion of data by GSK by understanding the entry requirements set by platform engineering teams and ensuring that the “barrier for entry” is met, e.g., scientific information has the appropriate metadata to be indexed, structured, integrated, and standardised as needed.
Provide bespoke subject matter expertise for R&D data, translating deep science into actionable insights.
Champion data lineage, data quality, and FAIR data principles across the Onyx platform, working with engineering and product teams to embed governance and quality frameworks into data pipelines (a minimal metadata check is sketched after this list).
Contribute to and maintain documentation of data standards, ontology decisions, and mapping rationale to support organizational knowledge transfer and auditability.
Support self-service data enablement by ensuring metadata and knowledge products are accessible, well documented, and usable by scientists and analysts without requiring bespoke engineering support.
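To make the metadata “barrier for entry” and FAIR-quality responsibilities above concrete, here is a minimal Python sketch of a metadata-completeness check that could run before ingestion. The required field names, the example record, and the function name are illustrative assumptions, not GSK's actual standard.

```python
"""Minimal sketch of a metadata-completeness check for incoming scientific
data, in the spirit of the FAIR principles above. Field names and the
example record are illustrative assumptions, not an actual GSK standard."""

REQUIRED_FIELDS = {
    "dataset_id",   # unique, resolvable identifier (Findable)
    "assay_type",   # controlled-vocabulary term (Interoperable)
    "organism",     # e.g., a taxonomy identifier (Interoperable)
    "license",      # reuse conditions (Reusable)
    "contact",      # steward responsible for the record (Accessible)
}

def validate_metadata(record: dict) -> list[str]:
    """Return a list of QC issues; an empty list means the record passes."""
    return [f"missing required field: {f}"
            for f in REQUIRED_FIELDS if not record.get(f)]

if __name__ == "__main__":
    candidate = {"dataset_id": "DS-0001", "assay_type": "cryo-EM", "organism": "9606"}
    for issue in validate_metadata(candidate):
        print(issue)  # flags the missing license and contact before ingestion
```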
Requirements
Master's degree in Bioinformatics, Biomedical Science, Biomedical Engineering, Molecular Biology, or Computer Science (with a life science application focus)
6+ years of relevant work experience
Experience contributing to Knowledge Graph development efforts, including entity modeling, relationship design, and schema governance (see the first sketch after this list)
Experience operating and leading a matrixed team across organizational boundaries
Experience with industry-standard data management / metadata platforms, e.g., Collibra, DataHub, Datum, Informatica
Proficiency in at least one programming language (preferably Python) for scripting vocabulary mappings, building data models, automating QC, and prototyping pipelines (see the second sketch after this list)
Experience with bioinformatics pipelines and workflow management systems (e.g., Nextflow)
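As a concrete illustration of the Knowledge Graph experience asked for above, the following is a minimal entity-modeling sketch using rdflib, an assumed tooling choice (the role does not prescribe a library). The namespace, classes, and the example entity are hypothetical.

```python
"""Minimal sketch of knowledge-graph entity modeling and relationship design
with rdflib. The ontology namespace and all terms are hypothetical."""
from rdflib import Graph, Namespace, Literal, RDF, RDFS

EX = Namespace("http://example.org/onto/")  # placeholder ontology namespace

g = Graph()
g.bind("ex", EX)

# Schema layer: declare entity classes and a relationship between them.
g.add((EX.Protein, RDF.type, RDFS.Class))
g.add((EX.Antigen, RDF.type, RDFS.Class))
g.add((EX.bindsTo, RDF.type, RDF.Property))
g.add((EX.bindsTo, RDFS.domain, EX.Protein))
g.add((EX.bindsTo, RDFS.range, EX.Antigen))

# Instance layer: a single illustrative entity with a human-readable label.
g.add((EX.P001, RDF.type, EX.Protein))
g.add((EX.P001, RDFS.label, Literal("example protein P001")))

print(g.serialize(format="turtle"))
```

And second, a minimal sketch of the Python proficiency described above: scripting a controlled-vocabulary mapping with an automated QC pass. The mapping table, term identifiers, and function names are illustrative placeholders, not a real vocabulary.

```python
"""Minimal sketch of a scripted controlled-vocabulary mapping plus automated
QC. All terms and identifiers below are illustrative placeholders."""

# Hypothetical mapping from free-text assay names to controlled-vocabulary IDs.
ASSAY_MAPPING = {
    "cryo em": "EXVOCAB:0001",
    "x ray crystallography": "EXVOCAB:0002",
    "elisa": "EXVOCAB:0003",
}

def map_term(raw: str) -> str | None:
    """Normalise a raw term and look it up; None marks it as unmapped for QC."""
    key = raw.strip().lower().replace("-", " ")
    return ASSAY_MAPPING.get(key)

def qc_mappings(raw_terms: list[str]) -> dict:
    """Automated QC: report coverage and collect unmapped terms for curation."""
    unmapped = [t for t in raw_terms if map_term(t) is None]
    coverage = 1 - len(unmapped) / len(raw_terms) if raw_terms else 1.0
    return {"coverage": coverage, "unmapped": unmapped}

if __name__ == "__main__":
    report = qc_mappings(["Cryo-EM", "ELISA", "SPR"])
    print(report)  # e.g., {'coverage': 0.66..., 'unmapped': ['SPR']}
```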
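In practice, unmapped terms surfaced by a QC pass like this would feed back into the curation of the controlled vocabularies the role is accountable for, rather than being silently dropped.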
Tech Stack
Informatica
Python
Benefits
Health care and other insurance benefits (for employee and family)