
Overview of Reading Assistant Norms


Over the past four years, half a million students have read more than a billion words to Reading Assistant. This vast pool of usage grounds Reading Assistant's norms in very large samples, allowing cutlines and Percentile Rankings (PRs) to reflect current student mastery. Reading Assistant generates norms for Science of Reading (SoR) metrics, Oral Reading Fluency (ORF), and Dyslexia Risk across grades K through 5 for each academic-year window: Fall, Winter, and Spring.

Reading Assistant norms enable schools and classes to rank students and identify those who may require additional assistance. These norms give teachers, schools, and districts a clear view of the degree of change and growth from window to window, and they are a valuable tool for implementing Response to Intervention (RTI).

To support monitoring of anticipated student growth, Reading Assistant norms were developed to reflect sensible progress from one window to the next, with particular attention to alignment at the 50th Percentile Rank. These norms are benchmarked against established national references such as the Hasbrouck-Tindal norms and Amplify's DIBELS.

Reading Assistant creates and utilizes three separate norming samples. Each is designed to enable the most reliable and valid generation of PRs for the designated metrics.
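The basic relationship between a norming sample and a Percentile Rank can be illustrated with a minimal sketch. The function and the sample scores below are hypothetical; Reading Assistant's actual norming procedure is considerably more sophisticated.

```python
from bisect import bisect_right

def percentile_rank(score, norm_sample):
    """Percent of the norming sample scoring at or below `score`."""
    ordered = sorted(norm_sample)
    at_or_below = bisect_right(ordered, score)  # count of scores <= score
    return round(100 * at_or_below / len(ordered))

# Hypothetical norming sample of metric scores for one grade/window.
sample = [12, 25, 31, 40, 48, 55, 61, 70, 82, 95]
print(percentile_rank(55, sample))  # 60: six of the ten scores are at or below 55
```

In practice, the norming sample is stratified to match national demographics before PRs are derived, which is why the three norming samples described below are constructed separately.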

Reading Assistant’s National Norms For Science of Reading Metrics
Reading Assistant’s diagnostic metrics reflect Science of Reading (SoR) research and mirror Scarborough’s Reading Rope.

Across all Reports and Extracts, Reading Assistant generates scores and norms and assigns students PRs for the Reading Assistant Reading Mastery (ARM), Rapid Automatized Naming (RAN) number/color/object Speed, and the following sub-measures corresponding to the Reading Rope Thread: Decoding (Alphabetic Knowledge), Phonological Awareness, Vocabulary, and High-Frequency Words. 

Schools and districts were selected to be nationally representative across several dimensions, including school type (public, private, and charter), multilingual learners, socioeconomic status, gender, and ethnicity. The sample demonstrated a representative distribution across geographic regions, encompassing all US Census Regions: West, Midwest, Northeast, and South. The total sample comprised 799,924 assessments across Grades K through 5 in English, collected during the 2022–2023 school year. The table below describes the features of the norming samples.
 

The Reading Assistant norms derived from this large, controlled sample are reflective of the national distribution of students against all target demographic factors as reported in the NCES.

Reading Assistant’s National Norms for Oral Reading Fluency
Reading Assistant’s Oral Reading Fluency (ORF) norms are based on the Hasbrouck & Tindal norms derived from 6.6 million ORF scores from students taking multiple assessments nationwide.
Hasbrouck and Tindal's work establishing national ORF norms over more than 25 years is widely recognized and used. The research began in the 1990s, was refreshed in 2006, and was updated in 2017. Reading Assistant relies on Hasbrouck and Tindal's 2017 norm data. The Hasbrouck and Tindal national norms were produced by aggregating ORF and Words Correct Per Minute (WCPM) data from various commercial tests. Per the Hasbrouck & Tindal report (2017, p. 8):

New updated ORF norms were ultimately compiled from three assessments: DIBELS 6th edition© (using data from 2009–2010), and DIBELS Next© (using data from 2010–2011), both published by Dynamic Measurement Group and available from the UO DIBELS Data System within the University of Oregon Center on Teaching and Learning in the College of Education. We also included scores from the easyCBM© ORF assessment, published by Houghton Mifflin Harcourt Riverside, also available from the UO DIBELS Data System and easyCBM.com. The easyCBM© data were from the 2013–2014 school year. These new ORF data files were compiled from technical documents establishing norms specific to each assessment. Rather than raw scores from those three assessments, the three sets of assessment-specific norms were then averaged to compile this new set of ORF norms. The details of the methodology used to construct the three sets of norms used in this study were available in separate technical reports: DIBELS® 6th Edition in Cummings, Otterstedt, Kennedy, Baker, and Kame’enui (2011); DIBELS Next® in Cummings, Kennedy, Otterstedt, Baker, and Kame’enui (2011); and easyCBM© in Saven, Tindal, Irvin, Farley, and Alonzo (2014). All three reports have been published by the College of Education at the University of Oregon.

The total number of assessments utilized to produce the Hasbrouck and Tindal norms was 6,663,423. The breakdown of these assessments by grade, time of year, and source is shown in the table below, drawn directly from the Hasbrouck and Tindal technical report (2017, p. 9).
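The compilation step described in the quoted passage, averaging three sets of assessment-specific norms rather than pooling raw scores, can be sketched as follows. The WCPM values here are illustrative placeholders, not the actual published norm values.

```python
# Hypothetical 50th-percentile WCPM values for one grade/season,
# drawn from three assessment-specific norm tables.
norms_by_assessment = {
    "DIBELS 6th Edition": 72,
    "DIBELS Next": 75,
    "easyCBM": 69,
}

# Compiled norm = mean of the assessment-specific norms (not of raw scores).
compiled_wcpm = sum(norms_by_assessment.values()) / len(norms_by_assessment)
print(compiled_wcpm)  # 72.0
```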

Reading Assistant’s National Norms for Dyslexia Risk
Reading Assistant's national norms for Dyslexia Risk are derived from a large sample of students who have taken Reading Assistant's Benchmark Screener over the last two years. As shown below, the Dyslexia Risk norming sample encompasses students from between 127 and 276 distinct school districts per grade. All students in the sample completed the Benchmark Screener and received a score.
 

Grade    Assessments    Students    Districts
K        25,991         17,673      127
1        66,827         36,031      268
2        72,535         39,038      276
3        57,315         30,854      240
4        24,121         14,682      156
5        21,081         13,774      131
Total    267,870        152,052     340

The Dyslexia Risk Indicator (DRI) cutlines are based on long-standing research conducted at the University of Texas Health Science Center. The norms inform the underlying statistics used to generate a student's DRI. However, the DRI is largely a criterion-based measure, so norming influences, but does not dictate, Reading Assistant's Dyslexia Risk categorization and index calculation.
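The distinction between a criterion-based cutline and a norm-influenced one can be sketched as follows. The function name, cutline values, and adjustment mechanism are hypothetical, shown only to illustrate how norms can influence a categorization without dictating it.

```python
def dyslexia_risk_category(score, criterion_cutline, norm_adjustment=0.0):
    """Criterion-based categorization; norms may shift the cutline, but the
    criterion threshold remains the basis of the decision."""
    effective_cutline = criterion_cutline + norm_adjustment
    return "At Risk" if score < effective_cutline else "Not At Risk"

# The criterion cutline drives the result...
print(dyslexia_risk_category(score=42, criterion_cutline=50))  # At Risk
# ...while norm data can nudge, not replace, the threshold.
print(dyslexia_risk_category(score=42, criterion_cutline=50, norm_adjustment=-10))  # Not At Risk
```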

The Dyslexia sample differs from the Diagnostic sample for two reasons:

  1. Some students only take the Screener, and others only take the ORF.

  2. To create a properly stratified sample, different students had to be included and excluded from the two norming populations.

Reading Assistant’s norms provide a rigorous foundation for benchmarking students nationwide.  All norms within Reading Assistant are:

  • based on a large sample size (N > 700,000);

  • carefully constructed to represent national demographics;

  • extracted from a broad and representative set of regions and districts;

  • constructed by experienced psychometricians and data scientists.

References
Hasbrouck, J., & Tindal, G. (2017). An update to compiled ORF norms (Technical Report No. 1702). Eugene, OR: Behavioral Research and Teaching, University of Oregon.
