Evidence-Based Assessment in the Science of Reading

Written by Una Malcolm, M.A., OCT

Introduction: Why Assess

Assessment for learning plays a critical role in informing and driving literacy instruction. Nearly all students are capable of learning to read; while specific numbers vary, estimates indicate that approximately 95% of students can learn to read (Fletcher & Vaughn, 2009). Rigorous, systematic, and explicit instruction that addresses the essential components of literacy, including phonemic awareness, phonics, fluency, vocabulary, and comprehension (National Reading Panel, 2000), is key to allowing all children to reach their full potential. This instruction, often referred to as structured literacy, must go hand-in-hand with a comprehensive assessment system that allows educators to adjust instruction to meet the specific needs of students (Spear-Swerling, 2018). Assessment is critical for students with learning disabilities (LDs) in terms of both the prevention of reading difficulties and intervention to remediate skill gaps.

Formative Assessment: Answering Questions

Assessment for learning plays a key role in meeting the needs of all students through the creation of equitable learning environments. Assessment for learning “is used in making decisions that affect teaching and learning in the short-term future” (Growing Success, 2010, p. 30). Multiple research reviews and meta-analyses have identified that systematic formative assessment has strong, consistent, positive effects on student learning (Fuchs & Fuchs, 1986; Black & Wiliam, 1998; Hattie, 2009) across a variety of factors, including student age, measurement frequency, and disability status. In essence, assessment for learning allows educators to identify where students are and to inform plans to get students where they need to be.

Even within assessment for learning, multiple types of assessments exist. Growing Success (2010) distinguishes between formative and diagnostic assessments but emphasizes that “what matters is how the information is used” (p. 31). Assessment answers questions; different types of assessment tools answer different types of questions. In a Response to Intervention model, assessment purposes include universal screening, diagnostic assessments, and progress monitoring; all three of these purposes for assessment answer key questions about shaping instructional environments to meet students’ needs.

1. Screening - Assessment for Learning

The development of skilled reading is a complex process that hinges on the development and integration of cognitive and language skills. Many of these skills can be observed and measured well before children receive any formal reading instruction (Catts et al., 2015). With data indicating that reading trajectories are established early and are very stable (Juel, 1988), it is critical to identify reading difficulties early in a student’s academic career to inform support. Since reading remediation at an earlier age is more effective than later intervention (Lovett et al., 2017), screening plays an important role in preventing early reading difficulties from being exacerbated over time.

All students should be screened with universal screening measures three times per year (Good et al., 2011). These assessments are very brief, standardized measures that quickly and efficiently identify which children are at risk. Standardized assessments are tests that have specific procedures for administration and scoring, such as scripted directions and explicit scoring guidelines (Farrall, 2012). Students are assessed on several indicators of early literacy skills appropriate for their age and grade. For example, Kindergarten and grade one students may be screened on phonemic awareness, phonics, and decoding, while older students may be screened on oral reading fluency (ORF) and retell for comprehension.

Universal screening measures are widely available and very accessible. Many of the universal screeners currently available are either low-cost or free. Often there can be a fee for training or optional computer data analysis systems, but many of the testing materials are very accessible.

These assessments are technically reliable and valid, and scores are interpreted in relation to a benchmark criterion. Benchmark cutscores are not determined arbitrarily but are derived from research; researchers use longitudinal studies to determine the threshold a child must reach to have a strong chance of meeting future reading goals. This means a child’s score allows educators to predict future reading performance with a reasonable degree of accuracy. Students who fall at or below the cutscore are at risk and will likely need additional support to meet future goals. For example, a benchmark cutscore on Acadience Reading assessments is the score needed for a child to have an 80 - 90% probability of meeting successive reading goals (Good et al., 2011). By comparing a child’s score to a benchmark score, educators can quickly and efficiently gauge the child’s risk of future reading difficulty before problems emerge or worsen.
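As a minimal sketch of how scores are interpreted against cutscores, the hypothetical example below maps a screening score to the broad risk categories used by measures like Acadience Reading; the numeric cutscores are illustrative placeholders, not published benchmarks:

```python
# Hypothetical benchmark interpretation: compare a student's screening
# score to research-derived cutscores to estimate risk. The numeric
# cutscores below are invented for illustration, not published values.

BENCHMARK_GOAL = 40   # hypothetical score giving ~80-90% odds of meeting the next goal
CUT_POINT = 25        # hypothetical score at or below which risk is considered high

def risk_level(score: int) -> str:
    """Translate a screening score into a broad risk category."""
    if score >= BENCHMARK_GOAL:
        return "At or Above Benchmark: likely on track"
    if score > CUT_POINT:
        return "Below Benchmark: likely needs strategic support"
    return "Well Below Benchmark: likely needs intensive support"

print(risk_level(43))  # At or Above Benchmark: likely on track
print(risk_level(18))  # Well Below Benchmark: likely needs intensive support
```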

Screening allows educators to quickly and easily catch students who are at risk. Screening does not, however, give teachers a full understanding of the underlying cause of a problem. Screening is akin to a blood pressure or temperature check at a doctor’s office; these are simple, efficient, and cost-effective ways to find problems, but abnormalities on these screeners require additional diagnostic testing to decipher what exactly is wrong.

2. Diagnostic Assessment - Assessment for Learning

Diagnostic assessments allow educators to probe deeper into the learning profiles of students in question. For many students, universal screening three times per year provides all the assessment data necessary to ensure students are making sufficient progress. For students who are at risk, though, careful analysis of reading subskills must occur to differentiate instruction and plan intervention. These assessments allow educators to ascertain which specific skills or knowledge a student has mastered, and which skills or knowledge need to be taught.

If universal screeners are similar to a blood pressure or temperature check to identify problems in a brief and cost-effective manner, diagnostic assessments are similar to more comprehensive blood tests or diagnostic imaging to dig deeper into a problem. The information from these more investigative approaches directly informs treatment, similar to how diagnostic reading assessments allow teachers to differentiate classroom instruction or plan targeted interventions.

Diagnostic assessments are longer and more in-depth than screening assessments and are typically not standardized. While screeners are quick assessments of indicators of basic skills, diagnostic assessments can be thought of as inventories of skills that directly inform instruction. Can a child segment two-, three-, or four-phoneme words? Can a child decode words with short vowels or consonant digraphs? Diagnostic assessments allow teachers to differentiate classroom instruction and plan intervention if necessary.

Diagnostic assessments should be selected strategically. The Simple View of Reading (Gough & Tunmer, 1986) indicates that reading comprehension, the end goal of reading, is a product of two sets of subskills, word recognition and language comprehension. When selecting diagnostic assessments, teachers can probe both areas of this model to clarify a student’s specific patterns of strengths and needs. When weaknesses exist in reading comprehension, is this due to difficulties with word recognition, with language comprehension, or with both?
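Expressed in notation, the model is multiplicative rather than additive; the sketch below uses the conventional shorthand, treating each component as a proficiency between 0 and 1:

```latex
% Simple View of Reading (Gough & Tunmer, 1986):
%   RC = reading comprehension
%   D  = decoding / word recognition
%   LC = language comprehension
\[ RC = D \times LC \]
% Because the terms multiply, a severe weakness in either D or LC
% drives RC toward zero no matter how strong the other component is,
% which is why diagnostic assessment probes both areas.
```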

3. Progress Monitoring - Assessment for Learning

With screening and diagnostic data in hand, teachers can differentiate core instruction and/or provide targeted intervention. Progress monitoring assessments allow educators to quickly measure a student’s response to instruction: instead of waiting until the next benchmark assessment several months in the future, progress monitoring allows for more rapid adjustment of instruction. While universal screening measures students’ skill level with grade-level material, progress monitoring is conducted at an individual child’s instructional level to directly measure response to instruction. A grade four child reading at a grade one level would be screened at the beginning, middle, and end of the year with grade four assessments; progress monitoring, though, would occur at a grade one level in order to measure the child’s response to intervention.

Many universal screeners also supply progress monitoring materials. These materials are often very similar to universal screening measures in that they are brief and standardized. When selecting an assessment for progress monitoring, it is important to consider practice effects; progress monitoring assessments typically have multiple alternate forms so students are not tested with the same material. Progress monitoring data are usually graphed so student growth can be compared visually to an aimline, the line between a child’s current score and the benchmark goal.
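As a small illustration of the arithmetic behind an aimline, the sketch below draws a straight line from a student’s current score to the benchmark goal over the remaining weeks; all scores are invented for the example:

```python
# Illustrative aimline computation: the expected score at each week if
# the student grows at exactly the rate needed to reach the goal on time.

def aimline(current_score: float, goal_score: float, weeks: int) -> list[float]:
    """Expected score at each week from now (week 0) to the goal date."""
    weekly_growth = (goal_score - current_score) / weeks
    return [current_score + weekly_growth * week for week in range(weeks + 1)]

# e.g., a current score of 13, a benchmark goal of 43, 15 weeks to mid-year
expected = aimline(13, 43, 15)
print(expected[:4])  # [13.0, 15.0, 17.0, 19.0] -- 2 points of growth per week
```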


Progress monitoring assessments are purposefully brief. Depending on student age and needs, this assessment may take between one and three minutes. Struggling students need to catch up quickly; Torgesen (2004) describes the “devastating downward spiral” that occurs when students do not receive timely, evidence-based intervention. Struggling readers do not have time for a “wait and see” approach. Weekly or biweekly progress monitoring provides enough data to quickly and clearly establish patterns, and educators can judge whether a child’s rate of progress is strong enough to meet the goal (a minimal sketch of this decision rule follows the list below). If not, there are several options that can be considered to intensify instruction:

  • Increasing frequency or length of support
  • Decreasing group size
  • Making groups more homogeneous
  • Increasing level of explicitness
  • Increasing opportunities for practice
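As a rough illustration of that judgment, the sketch below compares a student’s observed weekly growth with the growth rate the aimline requires; the scores and thresholds are invented for the example:

```python
# Illustrative decision rule: intensify instruction when the observed
# weekly growth falls short of the growth rate the aimline requires.

def weekly_growth(scores: list[float]) -> float:
    """Average week-to-week change across the monitoring period."""
    return (scores[-1] - scores[0]) / (len(scores) - 1)

def needs_intensifying(scores: list[float], goal: float, weeks_left: int) -> bool:
    required_growth = (goal - scores[-1]) / weeks_left
    return weekly_growth(scores) < required_growth

recent = [13, 14, 14, 16, 17]  # five weeks of progress monitoring scores
print(needs_intensifying(recent, goal=43, weeks_left=10))
# True: growing ~1 point/week, but ~2.6 points/week are needed
```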

To see an example of how the Thunder Bay Catholic DSB uses tier 1, 2, and 3 reading interventions, click here to view the LD@school video The Tiered Approach.

In addition to informing instruction, progress monitoring data offer insight into students’ learning profiles. Research indicates that a child’s response to intervention can signal whether the child may have a learning disability: patterns of inadequate response to evidence-based instruction, established through progress monitoring data, are key factors to consider when evaluating for a learning disability (Miciak & Fletcher, 2020; Catts & Petscher, 2021).

4. Outcome Evaluation - Assessment of Learning

Outcome assessments measure student achievement. While progress monitoring assessments show whether instruction is working, outcome assessments are assessments of learning: they identify whether instruction worked.

These assessments are typically comprehensive measures of student mastery of provincial curriculum expectations or specific learning goals. Outcome assessments often occur at the end of a period of learning, such as a unit of study, a term, or a school year.

Evidence-Based Assessment in the Science of Reading: Cheat sheet - Comparing and Contrasting Assessment Purposes


Click here to view and download the Evidence-Based Assessment in the Science of Reading: Cheat sheet

Assessment Data and Differentiating Tier 1 Instruction

Patterns in universal screening data can support teachers in differentiating classroom instruction. Instead of grouping students for small-group instruction by reading levels from a running record, educators can use universal screening data to build homogeneous groups based on specific reading skills. A grade 2 teacher, for example, could split students into several groups based on their universal screening data:

  • Fluent and accurate readers needing continued content and vocabulary instruction to build comprehension
  • Students who read accurately but need fluency instruction
  • Students in need of word-level decoding and phonics instruction

Data from screeners, together with deeper probing through diagnostic testing, allow for more sophisticated and specific differentiation for intervention, instruction, or enrichment, since they focus on specific skills instead of generic reading levels.

Student vs. System Analysis to Strengthen Universal Instruction

One of the strengths of universal screening is that it allows educators to analyze not only individual students’ growth but also the strength of classroom core instruction. If more than 20% of students fall below the benchmark on universal screening, it is a clear indication that efforts should be made to strengthen core instruction. Prioritizing individual intervention when a large proportion of the class is at risk misses a valuable opportunity to strengthen the universal tier of classroom instruction. With class-wide screening data, teachers are better able to differentiate instruction and potentially implement class-wide interventions, such as Peer-Assisted Learning Strategies (McMaster & Fuchs, 2016). For example, a grade 3 teacher could implement a paired repeated reading instructional routine (Stevens et al., 2017) in response to weak fluency screening data. Instead of problem-solving only at the individual student level, teachers have the data necessary to develop action plans that strengthen core instruction for all students.

To learn more about Peer Assisted Learning, click here to access LD@school’s article and video Peer-Mediated Learning Approaches.

Oral Reading Fluency and Running Records: Frequently Confused

Oral Reading Fluency assessments are commonly used in both screening and progress monitoring for developing readers. While individual assessments vary, ORF measures generally involve the timed oral reading of a passage for one minute. Accuracy rates and words correct per minute (WCPM) scores are calculated and interpreted in relation to a benchmark score. ORF assessments are highly correlated with reading comprehension (Fuchs et al., 2001) and serve as a proxy for this critical outcome skill.
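As a minimal sketch of the scoring arithmetic, assuming a one-minute timing and a simple error count, the example below computes accuracy and WCPM; the figures are invented:

```python
# Sketch of the two scores typically derived from a timed passage reading:
# accuracy (proportion of attempted words read correctly) and words
# correct per minute (WCPM). Numbers below are invented for illustration.

def orf_scores(words_attempted: int, errors: int, seconds: float = 60.0):
    words_correct = words_attempted - errors
    accuracy = words_correct / words_attempted      # proportion read correctly
    wcpm = words_correct * 60.0 / seconds           # words correct per minute
    return accuracy, wcpm

accuracy, wcpm = orf_scores(words_attempted=92, errors=6)
print(f"Accuracy: {accuracy:.0%}, WCPM: {wcpm:.0f}")  # Accuracy: 93%, WCPM: 86
```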

ORF assessments are often confused with the running record assessments commonly used in literacy programs. While both assessments involve the reading of connected text, running records differ in that a text “level” is identified as a student’s instructional or independent level based on an accuracy percentage, and fluency is typically assessed through teacher observation or rubrics. Running records do not meet the desired characteristics for the selection of a screener: they do not act as brief, reliable, and valid indicators of key early literacy skills, and they do not allow scores to be interpreted against a benchmark criterion derived from predictive probabilities. In fact, Parker et al. (2015) found that a commonly used running record assessment had only a .54 correlation with a district-provided criterion reading test, a relatively weak relationship (accounting for roughly 29% of the variance in scores) compared to the established technical validity and reliability of ORF measures.

Assessment for Learning: A Case Study

The following case study provides an example of how reading assessment can be used in the classroom to identify at-risk students, pinpoint the skills these students struggle with, and determine the appropriate intervention to get students back on track. 

Maya is in grade two. As a part of the beginning-of-year screening process, her teacher administers a brief screening assessment that yields this data about Maya’s reading:

Nonsense Word Fluency:

  • Correct Letter Sounds (phonics): Below Benchmark
  • Whole Words Read (decoding): Well Below Benchmark

Oral Reading Fluency:

  • Accuracy (passage reading accuracy): Below Benchmark
  • Words Correct (passage reading fluency): Well Below Benchmark

Maya’s teacher, Ms. Kaur, recognizes that Maya is struggling and is not on track to meet future reading goals. Ms. Kaur realizes that Maya is having difficulty reading the ORF passage accurately and smoothly because she is not yet an accurate and fluent word reader. Based on these screening data, she decides to dig deeper with some diagnostic assessments to help her differentiate instruction. She gives Maya a decoding assessment to determine which specific phonics patterns Maya has mastered and which patterns she needs to be taught; she learns that Maya hasn’t yet mastered short vowel sounds. She also uses a phonemic awareness assessment to check Maya’s blending and segmenting, which reveals skill gaps as well. Ms. Kaur concludes that Maya is having difficulty with word recognition skills. To check Maya’s oral language skills, Ms. Kaur reads a story aloud and asks oral comprehension questions. Maya easily answers questions about what she hears, indicating to Ms. Kaur that Maya’s reading difficulties appear to stem from decoding and word recognition difficulties, not language comprehension difficulties.

Armed with this information, Ms. Kaur builds her small reading groups. Instead of using reading levels to group students, Ms. Kaur uses reading skills. She groups Maya with other students needing phonemic awareness and short vowel instruction.

As Ms. Kaur uses this screening and diagnostic data to plan her groups and instruction, she also continues to monitor Maya’s progress using weekly Nonsense Word Fluency assessments. Since these progress monitoring assessments take one minute each, Ms. Kaur knows this is a quick and efficient way to make sure Maya is responding to her instruction without having to wait until the mid-year screening. If Maya is not making sufficient progress toward the benchmark score, instruction may need to be intensified to allow her to reach the benchmark goal. Ms. Kaur graphs Maya’s progress monitoring data, as well as the aimline, which shows the path between Maya’s current score and the benchmark. Ms. Kaur and her grade-level team can quickly and easily judge if Maya is progressing at a fast enough rate to close the gap.


About the author:

Una Malcolm is a doctoral student in Reading Science and holds a Master’s degree in Child Study and Education. She is a member of the Ontario College of Teachers. Una has trained extensively in several evidence-based teaching methods, such as Direct Instruction, Phono-Graphix, and Lindamood-Bell. She is an Acadience Reading K-6 (formerly DIBELS Next) Mentor.

References

Black, P. & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5(1), 7 - 74. https://doi.org/10.1080/0969595980050102

Catts, H.W., Nielsen, D. C., Bridges, M. S., Liu, Y. S., & Bontempo, D. E. (2015). Early identification of reading disabilities within an RTI framework. Journal of Learning Disabilities, 48(3), 281–297.

Catts, H.W. & Petscher, Y. (2021). A cumulative risk and resilience model of dyslexia. Journal of Learning Disabilities. https://doi.org/10.1177/00222194211037062

Eunice Kennedy Shriver National Institute of Child Health and Human Development, NIH, DHHS. (2000). Report of the National Reading Panel: Teaching Children to Read: Reports of the Subgroups. U.S. Government Printing Office.

Farrall, M. L. (2012). Reading assessment: Linking language, literacy, and cognition. John Wiley & Sons, Inc.. https://doi.org/10.1002/9781118092668

Fuchs, L.S. & Fuchs, D. (1986). Effects of systematic formative evaluation: A meta-analysis. Exceptional Children, 53(3), 199 - 208.

Fuchs, L., Fuchs, D., Hosp, M. & Jenkins, J. (2001). Oral reading fluency as an indicator of reading competence. Scientific Studies of Reading, 5(3), 239 - 256.

Fletcher, J.M. & Vaughn, S. (2009). Response to intervention: Preventing and remediating academic difficulties. Child Development Perspectives, 3(1), 30 - 37. https://doi.org/10.1111/j.1750-8606.2008.00072.x

Good, R.H., Kaminski, R.A., Cummings, K.D., Dufour-Martel, C., Petersen, K., Powell-Smith, K.A., Stollar, S. & Wallin, J. (2011). Acadience Reading K-6 Assessment Manual. Acadience Learning Inc.

Gough, P.B. & Tunmer, W.E. (1986). Decoding, reading, and reading disability. Remedial and Special Education, 7(1), 6 - 10. https://doi.org/10.1177/074193258600700104

Hattie, J. (2009). Visible learning. Routledge.

Juel, C. (1988). Learning to read and write: A longitudinal study of 54 children from first through fourth grades. Journal of Educational Psychology, 80(4), 437 - 447. https://doi.org/10.1037/0022-0663.80.4.437

Lovett, M., Frijters, J.C., Wolf, M., Steinback, K.A., Sevcik, R.A. & Morris, R.D. (2017). Early intervention for children at risk for reading disabilities: The impact of grade at intervention and individual differences on intervention outcomes. Journal of Educational Psychology, 109(7), 889 - 914. https://doi.org/10.1037/edu0000181

McMaster, K. L., & Fuchs, D. (2016). Classwide intervention using peer-assisted learning strategies. In S. R. Jimerson, M. K. Burns, & A. M. VanDerHeyden (Eds.), Handbook of response to intervention: The science and practice of multi-tiered systems of support (pp. 253–268). Springer. https://doi.org/10.1007/978-1-4899-7568-3_15

Miciak, J. & Fletcher, J.M. (2020). The critical role of instructional response for identifying dyslexia and other learning disabilities. Journal of Learning Disabilities, 53(5), 343 - 353. https://doi.org/10.1177/0022219420906801

Ontario Ministry of Education. (2010). Growing Success: Assessment, evaluation and reporting in Ontario's schools: Covering grades 1 to 12. Ministry of Education.

Parker, D.C., Zaslofsky, A.F., Burns, M.K., Kanive, R., Hodgson, J., Scholin, S.E. & Klingbeil, D.A. (2015). A brief report of the diagnostic accuracy of oral reading fluency and reading inventory levels for reading failure risk among second and third grade students. Reading & Writing Quarterly, 31(1), 56 - 67. https://doi.org/10.1080/10573569.2013.857970

Spear-Swerling, L. (2018). Structured literacy and typical literacy practices: Understanding differences to create instructional opportunities. Teaching Exceptional Children, 51(3), 201–211.

Stevens, E. A., Walker, M. A., & Vaughn, S. (2017). The effects of reading fluency interventions on the reading fluency and reading comprehension performance of elementary students with Learning Disabilities: A synthesis of the research from 2001 to 2014. Journal of Learning Disabilities, 50(5), 576–590. https://doi.org/10.1177/0022219416638028

Torgesen, J.K. (2004). Avoiding the devastating downward spiral: The evidence that early intervention prevents reading failure. American Educator, 28, 6-19.
