Designs, builds, and operates tools, services, and workflows that deliver high value by solving key business problems
Responsible for developing key components of a hybrid on-prem/cloud compute platform for both interactive and scalable batch computing, and for establishing processes and workflows to transition existing HPC users and teams to this platform
Responsible for code-driven environment, application, and container/image builds, as well as CI/CD-driven application deployments
Consults science users on scaling applications to petabytes of data, drawing on a deep understanding of software engineering, algorithms, and the underlying hardware infrastructure and their impact on performance
Confidently optimizes the design and execution of complex solutions within large-scale distributed computing environments
Produces well-engineered software, including appropriate automated test suites, technical documentation, and operational strategy
Ensures consistent application of platform abstractions to maintain quality and consistency in logging and lineage
Is fully versed in coding best practices and ways of working, and participates in code reviews, partnering with others to improve the team's standards
Adheres to the QMS framework and CI/CD best practices, and helps guide improvements to them that enhance ways of working
Provides leadership to team members to help them get the job done right
Requirements
Bachelor's degree in Data Engineering, Computer Science, Software Engineering, or a related discipline
6+ years of professional experience
Experience with Python
Experience with cloud computing platforms
Experience with High Performance Computing (HPC)
Tech Stack
Cloud
Python
Benefits
Health care and other insurance benefits (for employee and family)