Apply cutting-edge machine learning methods to develop algorithms for personalization, recommendation, and other predictive systems; maintain algorithms deployed to production and serve as the point person for explaining methodologies to technical and non-technical teams
Develop and maintain ETL pipelines using orchestration tools such as Airflow and Jenkins; deploy scalable streaming and batch data pipelines that support petabyte-scale datasets
Maintain existing and establish new algorithm development, testing, and deployment standards
Identify and define new personalization opportunities, and partner with other data teams to improve data collection, experimentation, and analysis
Requirements
5+ years of experience developing machine learning models, performing large-scale data analysis, and/or doing data engineering
5+ years writing production-level, scalable code (Python, SQL)
3+ years of experience developing algorithms for deployment to production systems
In-depth understanding of modern machine learning (e.g., deep learning methods), its models, and their mathematical underpinnings
Experience deploying and maintaining data pipelines and engineering big-data solutions using technologies such as Databricks, S3, and Spark
Ability to gauge the complexity of machine learning problems and willingness to apply simple approaches when they deliver quick, effective solutions
Strong written and verbal communication skills
Bachelor’s Degree in Computer Science, Math, Statistics, or related quantitative field
Tech Stack
Airflow
ETL
Jenkins
Python
Spark
SQL
Benefits
A bonus and/or long-term incentive units may be provided as part of the compensation package
Full range of medical, financial, and/or other benefits, dependent on the level and position offered