Amira Learning is an edtech company focused on accelerating literacy outcomes through AI. They are seeking a Data Scientist to design and validate statistical models that support their AI-powered literacy platform, collaborating closely with engineers and product managers to ensure measurement systems are sound and effective.
Responsibilities:
- Design and validate models underlying adaptive assessment systems, automated scoring pipelines, and real-time diagnostic feedback
- Develop and maintain automation pipelines for evaluating the impact of system changes on downstream score distributions and student classifications
- Apply and extend state-of-the-art statistical models to estimate student performance, growth, and score trajectories
- Develop simulations to evaluate and validate assessment design decisions
- Conduct rigorous validity and reliability analyses on data from early reading/literacy assessments
- Partner with ML engineers to design experiments validating AI scoring models, including automated speech recognition, NLP-based scoring, and adaptive algorithm performance
- Build AI-powered data pipelines and analytical tooling to monitor score quality, flag anomalies, and support continuous improvement of assessment models
- Use AI-assisted development tools — including Cursor, Claude Code, and similar platforms — as core parts of your daily workflow; comfort with and enthusiasm for these tools are essential
- Develop and validate norm-referenced and criterion-referenced score reporting frameworks grounded in statistical best practices
- Conduct linking, equating, and comparability studies to ensure consistent score interpretation across years, cohorts, and assessment variants
- Translate complex statistical methodology and results into clear, compelling narratives for non-technical audiences including school district leaders and state procurement teams
- Contribute to technical reports, white papers, and RFP responses demonstrating statistical rigor and validity evidence
- Support ongoing research addressing the unique challenges of AI-powered formative literacy assessment at scale
- Oversee data collection frameworks and longitudinal analytical designs that support ongoing model improvement
- Monitor assessment system performance across diverse student populations, and develop solutions to address fairness and equity concerns
- Collaborate cross-functionally to deliver analytical insights that directly inform product decisions and instructional recommendations