Applied Researcher I – AI Foundations, LLM Customization, Finetuning, Reinforcement Learning
McLean, New York, United States of America
Full Time
$218,700 - $272,300 USD
Visa sponsorship available
Key skills
AWS, Open Source, PyTorch, AI, Machine Learning, Deep Learning, NLP, LLM, Large Language Models
About this role
Role Overview
Partner with a cross-functional team of data scientists, software engineers, machine learning engineers and product managers to deliver AI-powered products that change how customers interact with their money.
Leverage a broad stack of technologies, including PyTorch, AWS UltraClusters, Hugging Face, Lightning, and vector databases, to reveal the insights hidden within huge volumes of numeric and textual data.
Build AI foundation models through all phases of development, from design through training, evaluation, validation, and implementation.
Engage in high impact applied research to take the latest AI developments and push them into the next generation of customer experiences.
Flex your interpersonal skills to translate the complexity of your work into tangible business goals.
Requirements
Currently has, or is in the process of obtaining, a PhD in Electrical Engineering, Computer Engineering, Computer Science, AI, Mathematics, or a related field, with the expectation that the degree will be conferred on or before the scheduled start date; or an M.S. in one of those fields plus 2 years of experience in applied research
Preferred Qualifications
PhD in Computer Science, Machine Learning, Computer Engineering, Applied Mathematics, Electrical Engineering, or related fields
LLM
PhD with a focus on NLP, or a Masters with 5 years of industrial NLP research experience
Multiple publications on topics related to the pre-training of large language models (e.g., technical reports of pre-trained LLMs, self-supervised learning techniques, model pre-training optimization)
Member of a team that has trained a large language model from scratch (10B+ parameters, 500B+ tokens)
Publications in deep learning theory
Publications at ACL, NAACL, EMNLP, NeurIPS, ICML, or ICLR
Finetuning
PhD focused on topics related to guiding LLMs with further tasks (Supervised Finetuning, Instruction-Tuning, Dialogue-Finetuning, Parameter Tuning)
Demonstrated knowledge of principles of transfer learning, model adaptation and model guidance
Experience deploying a fine-tuned large language model (see the illustrative sketch below)
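For context on the supervised fine-tuning and instruction-tuning topics listed above, the following is a minimal, purely illustrative sketch of one SFT step using PyTorch and Hugging Face Transformers, both named in this posting's tech stack. The checkpoint name, example text, and hyperparameters are placeholder assumptions, not part of the role's actual workflow.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoint; any causal LM would serve for illustration.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Toy instruction/response pair; a real SFT corpus contains many such examples.
text = "Instruction: Summarize this month's account activity.\nResponse: ..."
batch = tokenizer(text, return_tensors="pt")

# One supervised fine-tuning step: causal LM loss over the full sequence.
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()
optimizer.step()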
Data Preparation
Publications studying tokenization, data quality, dataset curation, or labeling
Contribution to a major open source corpus
Contribution to open source libraries for data quality, dataset curation, or labeling (see the illustrative sketch below)
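As a concrete, purely illustrative example of the data-quality and dataset-curation work referenced above, the sketch below applies exact deduplication and two simple heuristic filters to a list of documents. The function name and thresholds are assumptions for illustration, not an actual curation pipeline.

import hashlib

def dedup_and_filter(docs, min_chars=200, max_non_ascii_ratio=0.2):
    # Illustrative cleanup pass: drop exact duplicates, very short documents,
    # and documents that look mis-encoded. Thresholds are assumed values.
    seen, kept = set(), []
    for text in docs:
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if digest in seen:
            continue  # exact duplicate
        seen.add(digest)
        if len(text) < min_chars:
            continue  # too short to be useful training data
        non_ascii = sum(1 for ch in text if ord(ch) > 127)
        if non_ascii / len(text) > max_non_ascii_ratio:
            continue  # likely encoding damage
        kept.append(text)
    return kept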
Tech Stack
AWS
Open Source
PyTorch
Benefits
A comprehensive, competitive, and inclusive set of health, financial, and other benefits that supports total well-being.