Payscale is a compensation innovator helping organizations scale their business with pay. They are seeking an early-career Machine Learning Engineer to optimize and deploy AI/ML models into production environments, working closely with experienced engineers and data scientists.
Responsibilities:
- Partner with Data Science to package models for deployment and integrate them into our products and internal services
- Implement and improve ML deployment and inference workflows (batch and/or real-time), including automation and CI/CD patterns with guidance from senior engineers
- Build and maintain API endpoints or services that expose model predictions, including input validation, error handling, and documentation
- Write tests (unit, integration, and performance) to validate model behavior and service reliability; help create repeatable validation checks and release processes
- Instrument services with logging/metrics and help monitor production behavior; participate in incident triage and troubleshooting with support from the team
- Contribute to performance and cost improvements through profiling and practical techniques like batching, basic caching, and efficiency-minded design
- Stay current on relevant AI/ML engineering best practices and share learnings with the team
Requirements:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- 1+ years of experience (including internships/co-ops) building software in a production environment
- Proficiency in Python with a focus on readable, testable code
- Familiarity with core ML concepts and at least one ML framework (e.g., PyTorch, TensorFlow, scikit-learn)
- Familiarity with building or consuming APIs (HTTP/JSON) and basic service development patterns
- Comfort working in a collaborative environment: asking questions, communicating tradeoffs, and incorporating feedback
- Willingness to learn cloud, containerization, and MLOps practices as part of day-to-day work
Preferred qualifications:
- Exposure to MLOps tools or patterns (e.g., MLflow, Airflow, Kubeflow, feature stores, model registries)
- Experience with containers (Docker) and/or orchestration (Kubernetes)
- Experience with observability tools (e.g., Datadog, Prometheus/Grafana) and production troubleshooting
- Basic performance tuning experience (profiling, async patterns, caching concepts)
- Experience working with data platforms (e.g., Snowflake, Spark) or large-scale data pipelines