CVS Health is looking for a skilled Senior Data Engineer to join their DevOps team. This role involves integrating security into the software development lifecycle and collaborating with development and IT operations teams to implement automation using the latest AI technologies.
Requirements:
- 5+ years of Python coding experience, with expertise in solution and software development
- 5+ years in DevOps/MLOps and a deep understanding of software engineering principles
- 5+ years of experience with TensorFlow and PyTorch, plus cloud platform familiarity (AWS highly preferred)
- 5+ years of experience with cloud computing environments and resource management
- Bachelor's degree from an accredited university, or equivalent work experience (HS diploma plus 4 years of relevant experience)
- Proven ability to manage the production lifecycle of an MLOps pipeline
- Deep understanding of model deployment and serving
- Experience with orchestration and containerization methodologies
- Familiarity with model serving tools such as MLflow, Databricks Serving, SageMaker, etc. (TensorFlow Serving is a plus)
- Experience implementing CI/CD pipelines for ML projects
- Competence in developing automation pipelines (e.g., using GitHub Actions)
- Proficiency in Infrastructure as Code tools (e.g., Terraform, Databricks Asset Bundles)
- Strong understanding of monitoring and observability, including logging tools (e.g., Prometheus, Grafana) and drift detection
- Extensive experience with Databricks and other cloud-based development tools