<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>screening — Language &amp; Literacy</title>
    <link>https://languageandliteracy.blog/tag:screening</link>
    <description>Musings about language and literacy and learning</description>
    <pubDate>Wed, 15 Apr 2026 18:37:18 +0000</pubDate>
    <image>
      <url>https://i.snap.as/LIFR67Bi.png</url>
      <title>screening — Language &amp; Literacy</title>
      <link>https://languageandliteracy.blog/tag:screening</link>
    </image>
    <item>
      <title>Are Interim Assessments a Waste of Time?</title>
      <link>https://languageandliteracy.blog/are-interim-assessments-a-waste-of-time?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[There was a relatively recent Hechinger Report article by Jill Barshay, “PROOF POINTS: Researchers blast data analysis for teachers to help students,” that seemed to indict any and all assessments and data use in schools as a royal waste of time. It bothered me because the only source cited explicitly in the article was a 2020 opinion piece by a professor who discusses “interim assessment” in similarly vague terms and doesn’t provide explicit citations for her sources.&#xA;&#xA;I tweeted out my annoyance to this effect.&#xA;&#xA;To Ms. Barshay’s great credit, she responded with equanimity and generosity to my tweet with multiple citations.&#xA;&#xA;Since she took that time for me, I wanted to reciprocate by taking the time to review her sources with an open mind, as well as reflect on where I might land after doing so.&#xA;&#xA;BUT I’ve had this post sitting in my drafts for months now, and realized I’d never do quite the deep and full analysis I might prefer due to limited time. Instead, I’m just going to bullet a short relevant summary quote for each of the sources below:&#xA;&#xA;Cordray, D., Pion, G., Brandt, C., Molefe, A., &amp; Toby, M. (2012). The Impact of the Measures of Academic Progress (MAP) Program on Student Reading Achievement. (NCEE 2013–4000). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education.&#xA;&#xA;  “Overall, the MAP program did not have a statistically significant impact on students’ reading achievement in either grade 4 or grade 5.”&#xA;&#xA;Faria, A.-M., Heppen, J., Li, Y., Stachel, S., Jones, W., Sawyer, K., Thomsen, K., Kutner, M., Miser, D., Lewis, S., Casserly, M., Simon, C., Uzzell, R., Corcoran, A., &amp; Palacios, M. (2012). Charting Success: Data Use and Student Achievement in Urban Schools. In Council of the Great City Schools. Council of the Great City Schools. 
https://eric.ed.gov/?id=ED536748&#xA;&#xA;  “the more that teachers and principals reported reviewing and analyzing student data and using this information to make instructional decisions, the higher their students’ achievement, at least in some grades and subjects. Moreover for principals, the more they reported having support in the form of an appropriate data infrastructure, adequate time for review and discussion of data, professional development, and the appropriate human resources, the higher their students’ achievement.”&#xA;  “The results also appear to be in line with previous research that suggests that having interim assessments may be helpful but not sufficient to produce positive changes in student achievement.”&#xA;  “Although these findings do not identify the specific aspects of each dimension that are most important, it appears that data use by principals, particularly in elementary school, may be as important as teacher data use. This is in line with the findings from our site visits (as well as prevailing wisdom) that suggest that leadership and support from the administration are critical.”&#xA;&#xA;Henderson, S., Petrosino, A., Guckenburg, S., &amp; Hamilton, S. (2007). Measuring how benchmark assessments affect student achievement (Issues &amp; Answers Report, REL 2007–No. 039). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Northeast and Islands. Retrieved from http://ies.ed.gov/ncee/edlabs&#xA;&#xA;  “The study found no immediate statistically significant or substantively important difference between the program and comparison schools. 
That finding might, however, reflect limitations in the data rather than the ineffectiveness of benchmark assessments.”&#xA;  “Some nontrivial effects for subgroups might be masked by comparing school mean scores.”&#xA;&#xA;Henderson, S., Petrosino, A., Guckenburg, S., &amp; Hamilton, S. (2008). REL Technical Brief—a second follow-up year for “Measuring how benchmark assessments affect student achievement” (REL Technical Brief, REL Northeast and Islands 2007–No. 002). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Northeast and Islands. Retrieved from http://ies.ed.gov/ncee/edlabs&#xA;&#xA;  “The follow-up study finds no significant differences [in grade 8 math] between schools using [benchmark assessments] and those not doing so after two years.”&#xA;&#xA;Konstantopoulos, S., Li, W., Miller, S. R., &amp; van der Ploeg, A. (2016). Effects of Interim Assessments Across the Achievement Distribution. Educational and Psychological Measurement, 76(4), 587–608. https://doi.org/10.1177/0013164415606498&#xA;&#xA;  “The findings in Grades 3 to 8 overall suggest that Acuity had a positive and significant impact in various quantiles of the mathematics achievement distribution. The magnitude of the effects was consistently greater than one sixth of a SD. In contrast, in reading only the 10th quantile estimate was positive and significant. The findings in Grades K-2 overall suggest that the effect of mCLASS on mathematics or reading scores across the achievement distribution was small and not statistically different than zero.”&#xA;&#xA;Konstantopoulos, S., Miller, S., van der Ploeg, A., Li, C.-H., &amp; Traynor, A. (2011). The Impact of Indiana’s System of Diagnostic Assessments on Mathematics Achievement. In Society for Research on Educational Effectiveness. Society for Research on Educational Effectiveness. 
https://eric.ed.gov/?id=ED528756&#xA;&#xA;  “it is unclear that the intervention had any systematic effects on student achievement except for fifth grade mathematics.”&#xA;&#xA;Some additional sources beyond Barshay’s to consider in relation to this topic:&#xA;&#xA;Tim Shanahan has a blog post, “Do Screening and Monitoring Tests Really Help?” that does a nice job summarizing a number of additional sources around screening and monitoring for literacy.&#xA;&#xA;  “the evidence supporting the use of such testing to improve reading achievement is neither strong nor straightforward. The pieces are there, but the connections are a bit shaky.”&#xA;  “My conclusions, from all this evidence, is that it is possible to make effective the kind of assessment that you are complaining about. However, it should also be evident that such efforts too often fail to deliver on those promises.”&#xA;  “In many schools/districts/states, we are overdoing it! The only reason to test someone is to find out something that you don’t know. If you know students are struggling with decoding, testing them to prove it doesn’t add much.”&#xA;  “The point of all this testing is to reshape your teaching to ensure that kids learn. Unfortunately, these heavy investments in assessment aren’t always (or even usually) accompanied by similar exertions in the differentiation arena.”&#xA;&#xA;Paly, B. J., Klingbeil, D. A., Clemens, N. H., &amp; Osman, D. J. (2022). A cost-effectiveness analysis of four approaches to universal screening for reading risk in upper elementary and middle school. Journal of School Psychology, 92, 246–264. https://doi.org/10.1016/j.jsp.2022.03.009&#xA;&#xA;  “The results suggest that the use of prior-year statewide achievement test data alone in Grades 4–8 is an efficient approach to universal screening for reading risk that may allow schools to shift resources from screening to other educational priorities.”&#xA;&#xA;Heckman, J., &amp; Zhou, J. (2022). Measuring Knowledge. 
IZA Institute of Labor Economics, IZA DP No. 15252. Retrieved from https://www.iza.org/publications/dp/15252/measuring-knowledge&#xA;&#xA;  “Value-added measures are widely used to measure the output of schools. Aggregate test scores are used to measure gaps in skills across demographic groups. This paper shows that this practice is unwise. The aggregate measures used to chart student gains, child development, and the contribution of teachers and caregivers to student development are not comparable over time and persons except, possibly, for narrowly defined measures of skill.”&#xA;&#xA;OK, so this review was admittedly cursory. But even just pulling out the key findings shows that the issue of district and school-wide assessments is not totally clear-cut in either direction. There are some positive results here and there, but they are mixed.&#xA;&#xA;Definitely some food for thought, wherever one might stand. So where am I?&#xA;&#xA;I’m not a fan of over-testing, I don’t think many schools use data effectively for a wide variety of reasons, and I’ve seen how assessment data can be easily misinterpreted to reinforce deficit mindsets for Black students, ELLs, students with disabilities, and other typically marginalized students. And I believe the balance of attention should lie far more heavily on the side of formative, rather than summative/evaluative, use of assessments.&#xA;&#xA;That said, I also believe in the need for accountability and more objective measurements through the use of external, “standardized” (i.e. normed and validated) assessments, and I have seen that when triangulated with multiple sources, including qualitative sources, and discussed as a team using a structured protocol—and most importantly, discussed with students themselves—data can be a powerful tool for equity, empowerment, and responsive instructional supports. 
Furthermore, interim assessments, here meaning valid and reliable assessments used at strategic points throughout the school year, can be used to measure growth in a formative sense and to inform structural decisions at the school or grade/department-level that can provide more adaptive supports to many more students.&#xA;&#xA;That said, however, I also want to draw a clear line in the sand between “screening” and “benchmarks/interim assessments.” Interim assessments can be used for screening purposes, and screening can be done throughout the school year and thus become synonymous with interim assessment, but not all interim assessments are ideal screeners. This may sound like splitting hairs, but I think it’s a critical distinction, because screening is about efficient and proactive identification of need—which may lead to further data collection and analysis—while interim assessments can often be time consuming without providing more granular information. What’s a good example of a screening tool that fits this function? The DIBELS/Acadience Oral Reading Fluency (ORF) measure is a good example. It’s quick, relatively easy to administer, normed, and has a solid research base behind its use for this purpose. Another good example is the CUBED preK-3 suite of assessments, such as the NLM Reading and Listening tools. Such screening tools can provide an efficient distinction of students at possible risk of struggling to read, and if used in this way, can lead to preventative action and interventions that can improve outcomes.&#xA;&#xA;Any source of quantitative data is potentially questionable, so the more that’s available and able to be contextualized alongside of qualitative to build a coherent story, the better, in my view.&#xA;&#xA;But the key is that we never lose sight of the children in front of us and their ever-evolving, dynamic strengths and needs. 
Data must inform precise supports and be used to build a collaborative story, a story that empowers both students and the adults who work with them with the language of growth, potential, and clear instructional goals.&#xA;&#xA;Anything else is a distraction.&#xA;&#xA;#assessment #reading #screening #literacy #data #research&#xA;&#xA;&lt;a href=&#34;https://remark.as/p/languageandliteracy.blog/are-interim-assessments-a-waste-of-time&#34;&gt;Discuss...&lt;/a&gt;]]&gt;</description>
      <content:encoded><![CDATA[<p>There was a relatively recent Hechinger Report article by Jill Barshay, <a href="https://hechingerreport.org/proof-points-researchers-blast-data-analysis-for-teachers-to-help-students/">“<em>PROOF POINTS: Researchers blast data analysis for teachers to help students</em>”</a>, that seemed to indict any and all assessments and data use in schools as a royal waste of time. It bothered me because the only source cited explicitly in the article was <a href="https://www.edweek.org/technology/opinion-does-studying-student-data-really-raise-test-scores/2020/02">a 2020 opinion piece</a> by a professor who discusses “interim assessment” in similarly vague terms and doesn’t provide explicit citations for her sources.</p>

<p>I <a href="https://x.com/mandercorn/status/1498477332700536832?s=20">tweeted out my annoyance</a> to this effect.</p>

<p>To Ms. Barshay’s great credit, <a href="https://x.com/jillbarshay/status/1498742114083127310?s=20">she responded</a> with equanimity and generosity to my tweet <em>with multiple citations.</em></p>

<p>Since she took that time for me, I wanted to reciprocate by taking the time to review her sources with an open mind, as well as reflect on where I might land after doing so.</p>



<p>BUT I’ve had this post sitting in my drafts for months now, and realized I’d never do quite the deep and full analysis I might prefer due to limited time. Instead, I’m just going to bullet a short relevant summary quote for each of the sources below:</p>
<ol><li>Cordray, D., Pion, G., Brandt, C., Molefe, A., &amp; Toby, M. (2012). The Impact of the Measures of Academic Progress (MAP) Program on Student Reading Achievement. (NCEE 2013–4000). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education.</li></ol>

<blockquote><p>“Overall, the MAP program did not have a statistically significant impact on students’ reading achievement in either grade 4 or grade 5.”</p></blockquote>
<ol start="2"><li>Faria, A.-M., Heppen, J., Li, Y., Stachel, S., Jones, W., Sawyer, K., Thomsen, K., Kutner, M., Miser, D., Lewis, S., Casserly, M., Simon, C., Uzzell, R., Corcoran, A., &amp; Palacios, M. (2012). Charting Success: Data Use and Student Achievement in Urban Schools. In Council of the Great City Schools. Council of the Great City Schools. <a href="https://eric.ed.gov/?id=ED536748">https://eric.ed.gov/?id=ED536748</a></li></ol>

<blockquote><p>“the more that teachers and principals reported reviewing and analyzing student data and using this information to make instructional decisions, the higher their students’ achievement, at least in some grades and subjects. Moreover for principals, the more they reported having support in the form of an appropriate data infrastructure, adequate time for review and discussion of data, professional development, and the appropriate human resources, the higher their students’ achievement.”</p>
<p>“The results also appear to be in line with previous research that suggests that having interim assessments may be helpful but not sufficient to produce positive changes in student achievement.”</p>
<p>“Although these findings do not identify the specific aspects of each dimension that are most important, it appears that data use by principals, particularly in elementary school, may be as important as teacher data use. This is in line with the findings from our site visits (as well as prevailing wisdom) that suggest that leadership and support from the administration are critical.”</p></blockquote>
<ol start="3"><li>Henderson, S., Petrosino, A., Guckenburg, S., &amp; Hamilton, S. (2007). Measuring how benchmark assessments affect student achievement (Issues &amp; Answers Report, REL 2007–No. 039). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Northeast and Islands. Retrieved from <a href="http://ies.ed.gov/ncee/edlabs">http://ies.ed.gov/ncee/edlabs</a></li></ol>

<blockquote><p>“The study found no immediate statistically significant or substantively important difference between the program and comparison schools. That finding might, however, reflect limitations in the data rather than the ineffectiveness of benchmark assessments.”</p>
<p>“Some nontrivial effects for subgroups might be masked by comparing school mean scores.”</p></blockquote>
<ol start="4"><li>Henderson, S., Petrosino, A., Guckenburg, S., &amp; Hamilton, S. (2008). REL Technical Brief—a second follow-up year for “Measuring how benchmark assessments affect student achievement” (REL Technical Brief, REL Northeast and Islands 2007–No. 002). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Northeast and Islands. Retrieved from <a href="http://ies.ed.gov/ncee/edlabs">http://ies.ed.gov/ncee/edlabs</a></li></ol>

<blockquote><p>“The follow-up study finds no significant differences [in grade 8 math] between schools using [benchmark assessments] and those not doing so after two years.”</p></blockquote>
<ol start="5"><li>Konstantopoulos, S., Li, W., Miller, S. R., &amp; van der Ploeg, A. (2016). Effects of Interim Assessments Across the Achievement Distribution. Educational and Psychological Measurement, 76(4), 587–608. <a href="https://doi.org/10.1177/0013164415606498">https://doi.org/10.1177/0013164415606498</a></li></ol>

<blockquote><p>“The findings in Grades 3 to 8 overall suggest that Acuity had a positive and significant impact in various quantiles of the mathematics achievement distribution. The magnitude of the effects was consistently greater than one sixth of a SD. In contrast, in reading only the 10th quantile estimate was positive and significant. The findings in Grades K-2 overall suggest that the effect of mCLASS on mathematics or reading scores across the achievement distribution was small and not statistically different than zero.”</p></blockquote>
<ol start="6"><li>Konstantopoulos, S., Miller, S., van der Ploeg, A., Li, C.-H., &amp; Traynor, A. (2011). The Impact of Indiana’s System of Diagnostic Assessments on Mathematics Achievement. In Society for Research on Educational Effectiveness. Society for Research on Educational Effectiveness. <a href="https://eric.ed.gov/?id=ED528756">https://eric.ed.gov/?id=ED528756</a></li></ol>

<blockquote><p>“it is unclear that the intervention had any systematic effects on student achievement except for fifth grade mathematics.”</p></blockquote>

<p>Some additional sources beyond Barshay’s to consider in relation to this topic:</p>
<ol><li>Tim Shanahan has a blog post, <a href="https://www.shanahanonliteracy.com/blog/do-screening-and-monitoring-tests-really-help">“Do Screening and Monitoring Tests Really Help?”</a>, that does a nice job summarizing a number of additional sources around screening and monitoring for literacy.</li></ol>

<blockquote><p>“the evidence supporting the use of such testing to improve reading achievement is neither strong nor straightforward. The pieces are there, but the connections are a bit shaky.”</p>
<p>“My conclusions, from all this evidence, is that it is possible to make effective the kind of assessment that you are complaining about. However, it should also be evident that such efforts too often fail to deliver on those promises.”</p>
<p>“In many schools/districts/states, we are overdoing it! The only reason to test someone is to find out something that you don’t know. If you know students are struggling with decoding, testing them to prove it doesn’t add much.”</p>
<p>“The point of all this testing is to reshape your teaching to ensure that kids learn. Unfortunately, these heavy investments in assessment aren’t always (or even usually) accompanied by similar exertions in the differentiation arena.”</p></blockquote>
<ol start="2"><li>Paly, B. J., Klingbeil, D. A., Clemens, N. H., &amp; Osman, D. J. (2022). A cost-effectiveness analysis of four approaches to universal screening for reading risk in upper elementary and middle school. Journal of School Psychology, 92, 246–264. <a href="https://doi.org/10.1016/j.jsp.2022.03.009">https://doi.org/10.1016/j.jsp.2022.03.009</a></li></ol>

<blockquote><p>“The results suggest that the use of prior-year statewide achievement test data alone in Grades 4–8 is an efficient approach to universal screening for reading risk that may allow schools to shift resources from screening to other educational priorities.”</p></blockquote>
<ol start="3"><li>Heckman, J., &amp; Zhou, J. (2022). Measuring Knowledge. IZA Institute of Labor Economics, IZA DP No. 15252. Retrieved from <a href="https://www.iza.org/publications/dp/15252/measuring-knowledge">https://www.iza.org/publications/dp/15252/measuring-knowledge</a></li></ol>

<blockquote><p>“Value-added measures are widely used to measure the output of schools. Aggregate test scores are used to measure gaps in skills across demographic groups. This paper shows that this practice is unwise. The aggregate measures used to chart student gains, child development, and the contribution of teachers and caregivers to student development are not comparable over time and persons except, possibly, for narrowly defined measures of skill.”</p></blockquote>

<p>OK, so this review was admittedly cursory. But even just pulling out the key findings shows that the issue of district and school-wide assessments is not totally clear-cut in either direction. There are some positive results here and there, but they are mixed.</p>

<p>Definitely some food for thought, wherever one might stand. So where am I?</p>

<p>I’m not a fan of over-testing, I don’t think many schools use data effectively for a wide variety of reasons, and I’ve seen how assessment data can be easily misinterpreted to reinforce deficit mindsets for Black students, ELLs, students with disabilities, and other typically marginalized students. And I believe the balance of attention should lie far more heavily on the side of formative, rather than summative/evaluative, use of assessments.</p>

<p>That said, I also believe in the need for accountability and more objective measurements through the use of external, “standardized” (i.e. normed and validated) assessments, and I have seen that when triangulated with multiple sources, including qualitative sources, and discussed as a team using a structured protocol—and most importantly, <em>discussed with students themselves</em>—data can be a powerful tool for equity, empowerment, and responsive instructional supports. Furthermore, interim assessments, here meaning valid and reliable assessments used at strategic points throughout the school year, can be used to measure growth in a formative sense and to inform structural decisions at the school or grade/department-level that can provide more adaptive supports to many more students.</p>

<p>However, I also want to draw a clear line in the sand between “screening” and “benchmarks/interim assessments.” Interim assessments can be used for screening purposes, and screening can be done throughout the school year and thus become synonymous with interim assessment, <em>but not all interim assessments are ideal screeners</em>. This may sound like splitting hairs, but I think it’s a critical distinction, because screening is about <em>efficient</em> and proactive identification of need—which may lead to further data collection and analysis—while interim assessments can often be time consuming without providing more granular information. What’s a screening tool that fits this function? The DIBELS/Acadience Oral Reading Fluency (ORF) measure is a good example: it’s quick, relatively easy to administer, normed, and has a solid research base behind its use for this purpose. Another is the CUBED preK-3 suite of assessments, such as the NLM Reading and Listening tools. Such screening tools can efficiently identify students at possible risk of struggling to read and, used in this way, can lead to preventative action and interventions that improve outcomes.</p>

<p>Any source of quantitative data is potentially questionable, so the more that’s available and can be contextualized alongside qualitative data to build a coherent story, the better, in my view.</p>

<p>But the key is that we never lose sight of the children in front of us and their ever-evolving, dynamic strengths and needs. Data must inform precise supports and be used to build a collaborative story, a story that empowers both students and the adults who work with them with the language of growth, potential, and clear instructional goals.</p>

<p>Anything else is a distraction.</p>

<p><a href="https://languageandliteracy.blog/tag:assessment" class="hashtag"><span>#</span><span class="p-category">assessment</span></a> <a href="https://languageandliteracy.blog/tag:reading" class="hashtag"><span>#</span><span class="p-category">reading</span></a> <a href="https://languageandliteracy.blog/tag:screening" class="hashtag"><span>#</span><span class="p-category">screening</span></a> <a href="https://languageandliteracy.blog/tag:literacy" class="hashtag"><span>#</span><span class="p-category">literacy</span></a> <a href="https://languageandliteracy.blog/tag:data" class="hashtag"><span>#</span><span class="p-category">data</span></a> <a href="https://languageandliteracy.blog/tag:research" class="hashtag"><span>#</span><span class="p-category">research</span></a></p>

<p><a href="https://remark.as/p/languageandliteracy.blog/are-interim-assessments-a-waste-of-time">Discuss...</a></p>
]]></content:encoded>
      <guid>https://languageandliteracy.blog/are-interim-assessments-a-waste-of-time</guid>
      <pubDate>Sat, 16 Apr 2022 01:26:59 +0000</pubDate>
    </item>
    <item>
      <title>The Science of Reading and Cancer</title>
      <link>https://languageandliteracy.blog/the-science-of-reading-and-cancer?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[I have somewhat eclectic book reading habits, and I take pleasure in reading haphazardly (i.e. whatever I happen to come across). After growing bored with Moby Dick recently, I happened across a copy of Siddhartha Mukherjee’s The Emperor of All Maladies: A Biography of Cancer.&#xA;&#xA;The book is compellingly written, narrating an expansive overview of the history of the treatment of cancer while at the same time painting portraits of individual researchers, clinicians, and patients that draw the reader in. It makes oncology research and clinical practice sound exciting, which is no small feat.&#xA;&#xA;As I read Mukherjee’s book, I began drawing parallels between the slow but accumulating body of knowledge on cancers and research on literacy development.&#xA;&#xA;Cancer, just like reading, has seen massive investments and nationwide commitments to improve outcomes, yet with seemingly little to show for all the rhetoric, money, and effort. And just as with reading, in the absence of clear evidence and knowledge, there have been ego-driven and problematic practices and treatments, and many strong assertions with little data.&#xA;&#xA;Yet through Mukherjee’s telling, it becomes clear that, however zigzagging and plodding and subject to the whims of character and fortune, science and knowledge have slowly advanced, and that–while no silver bullet exists for all cancers–we now have an arsenal of screening tools and specific treatments for specific forms of cancer that we can wield. However incremental, the field is advancing.&#xA;&#xA;In reading and literacy and language research, it can feel at times like the “science” is a matter of opinion, and that we don’t know much at all. There are many who discount the value of empirical research in the field of education completely. 
And yet, I think we would do well to heed the history of cancer, both to see the progress we have made and can make, and to guard more carefully against overzealous claims by silver bullet enthusiasts.&#xA;&#xA;One passage in particular, describing the endeavors of Henry Kaplan, a radiologist in the 1960s, got me started in thinking along these lines:&#xA;&#xA;  This simple principle—the meticulous matching of a particular therapy to a particular form and stage of cancer—would eventually be given its due merit in cancer therapy. Early-stage, local cancers, Kaplan realized, were often inherently different from widely spread, metastatic cancers—even within the same form of cancer. A hundred instances of Hodgkin’s disease, even though pathologically classified as the same entity, were a hundred variants around a common theme. Cancers possessed temperaments, personalities—behaviors. And biological heterogeneity demanded therapeutic heterogeneity; the same treatment could not indiscriminately be applied to all.&#xA;&#xA;The insight described here is that while we use one term to describe the phenomenon of “cancer,” researchers increasingly realized that different cancers manifested in incredibly diverse ways, and thus required similarly diverse approaches in treatment.&#xA;&#xA;Prior to this insight, a silver bullet was sought against all forms of cancer, and all kinds of ego-driven practices and over-extrapolations of unclear research led to, for example, mastectomies that tore out nearly everything from the shoulder to the ribs, in the zealous belief that cancer would be rooted out.&#xA;&#xA;How often do we hear “dyslexia” described as a general construct that requires a silver bullet solution? 
Yet increasing research demonstrates the genetic and biological variation in individual brain development that can manifest in difficulty with literacy or language — and may thus require differing forms of instruction and supports.&#xA;&#xA;What are the implications for assessment? Here’s another passage that stood out on this idea of heterogeneity:&#xA;&#xA;  But although these alternatives did not offer definitive cures, several important principles of cancer biology and cancer therapy were firmly cemented in these powerful trials. First, as Kaplan had found with Hodgkin’s disease, these trials again clearly etched the message that cancer was enormously heterogeneous. Breast or prostate cancers came in an array of forms, each with unique biological behaviors. The heterogeneity was genetic: in breast cancer, for instance, some variants responded to hormonal treatment, while others were hormone-unresponsive. And the heterogeneity was anatomic: some cancers were localized to the breast when detected, while others had a propensity to spread to distant organs.&#xA;&#xA;  Second, understanding that heterogeneity was of deep consequence. “Know thine enemy” runs the adage, and Fisher’s and Bonadonna’s trials had shown that it was essential to “know” the cancer as intimately as possible before rushing to treat it.&#xA;&#xA;I want to be careful about drawing too closely on an extended analogy between cancer and reading — but there is a similar need in schools to build more precise and accurate profiles of students to ensure the right form of instruction and intervention. 
We often land on the simple distinction of “students not meeting standards,” then rely on item analysis of standards (which are at a composite level of performance), rather than identifying the underlying literacy and language skills that could be targeted for further support.&#xA;&#xA;Here’s another passage on cancer screening, which certainly has some similarities to screening for reading and language difficulty in schools:&#xA;&#xA;  In cancer, where both overdiagnosis and underdiagnosis come at high costs, finding that exquisite balance is often impossible. We want every cancer test to operate with perfect specificity and sensitivity. But the technologies for screening are not perfect. . . .&#xA;&#xA;  No; merely detecting a small tumor is not sufficient. Cancer demonstrates a spectrum of behavior. . . To address the inherent behavioral heterogeneity of cancer, the screening test must go further. It must increase survival.&#xA;&#xA;For screening in schools, it must increase literacy attainment. And this is where the rubber hits the road. 
Even when a school is drowning in data, that does not mean the needed action will be taken, whether to improve core instruction across classrooms or to put in place the right interventions for the right groups of students at the right time.&#xA;&#xA;Academic specialization produces terminology so precise that it is opaque to anyone outside the domain, and such work may seem far removed from classroom practice. But I don’t see how we can make headway until we more fully unpack how, when, and where learning happens in the brain, in relation to the body and its environment, while also identifying the forms and uses of language and literacy that are most fundamental.&#xA;&#xA;Despite the great heterogeneity of cancer, scientists have begun to recognize some universal understandings that are leading to more effective treatments:&#xA;&#xA;  Biologists looking directly into cancer’s maw now recognized that roiling beneath the incredible heterogeneity of cancer were behaviors, genes, and pathways. . . . Notably, Weinberg and Hanahan wrote, these six rules were not abstract descriptions of cancer’s behavior. Many of the genes and pathways that enabled each of these six behaviors had concretely been identified—ras, myc, Rb, to name just a few. The task now was to connect this causal understanding of cancer’s deep biology to the quest for its cure . . . The mechanistic maturity of cancer science would create a new kind of cancer medicine, Weinberg and Hanahan posited: “With holistic clarity of mechanism, cancer prognosis and treatment will become a rational science, unrecognizable by current practitioners.” Having wandered in the darkness for decades, scientists had finally reached a clearing in their understanding of cancer. Medicine’s task was to continue that journey toward a new therapeutic attack.&#xA;&#xA;We are beginning to recognize some universals and particulars of language and literacy, as well. We can improve literacy outcomes for our society. 
It just may be much more complex, and progress much more slowly, than we’d like to think.&#xA;&#xA;I’m not being a Pollyanna here on either front, by the way. My father died of lymphoma last December after being diagnosed in October, and his doctors seemed as surprised as we were when his artery ruptured suddenly just as his third round of chemotherapy began. Some forms of cancer will continue to kill us prematurely, despite our best efforts based on our current understanding of the research and the technological tools in our arsenal. And some children will continue to struggle to read and write fluently, despite the concerted efforts of many committed educators.&#xA;&#xA;My sincere hope is that every casualty along the way provides new learning that can inform improvement. If we learn from every failure, then each failure will not be in vain.&#xA;&#xA;#cancer #language #literacy #reading #dyslexia #screening #SiddharthaMukerjee #research #knowledge #heterogeneity&#xA;&#xA;a href=&#34;https://remark.as/p/languageandliteracy.blog/the-science-of-reading-and-cancer&#34;Discuss.../a]]&gt;
      <content:encoded><![CDATA[<p>I have somewhat eclectic book reading habits, and I take pleasure in reading haphazardly (i.e., whatever I happen to come across). After <a href="https://twitter.com/mandercorn/status/1434297825999921152?s=20">growing bored with Moby Dick</a> recently, I happened across a copy of Siddhartha Mukherjee’s <em>The Emperor of All Maladies: A Biography of Cancer</em>.</p>

<p>The book is compellingly written, narrating an expansive history of cancer treatment while painting portraits of individual researchers, clinicians, and patients that draw the reader in. It makes oncology research and clinical practice sound exciting, which is no small feat.</p>



<p>As I read Mukherjee’s book, I began drawing parallels between the slow but accumulating body of knowledge on cancer and research on literacy development.</p>

<p>Cancer, just like reading, has seen massive investments and nationwide commitments to improve outcomes, yet with seemingly little to show for all the rhetoric, money, and effort. And just as with reading, in the absence of clear evidence and knowledge, there have been ego-driven and problematic practices and treatments, and many strong assertions with little data.</p>

<p>Yet through Mukherjee’s telling, it becomes clear that, however zigzagging and plodding and subject to the whims of character and fortune, science and knowledge have slowly advanced. While no silver bullet exists for all cancers, we now have an arsenal of screening tools and specific treatments for specific forms of cancer that we can wield, and however incremental the progress, the field is advancing.</p>

<p>In reading and literacy and language research, it can feel at times as though the “science” is a matter of opinion and we know very little at all. Many discount the value of empirical research in the field of education entirely. And yet, I think we would do well to heed the history of cancer, both to see the progress we have made and can make, and to guard more carefully against the overzealous claims of silver-bullet enthusiasts.</p>

<p>One passage in particular, describing the endeavors of Henry Kaplan, a radiologist in the 1960s, got me started in thinking along these lines:</p>

<blockquote><p>This simple principle—the meticulous matching of a particular therapy to a particular form and stage of cancer—would eventually be given its due merit in cancer therapy. Early-stage, local cancers, Kaplan realized, were often inherently different from widely spread, metastatic cancers—even within the same form of cancer. A hundred instances of Hodgkin’s disease, even though pathologically classified as the same entity, were a hundred variants around a common theme. Cancers possessed temperaments, personalities—behaviors. And biological heterogeneity demanded therapeutic heterogeneity; the same treatment could not indiscriminately be applied to all.</p></blockquote>

<p>The insight described here is that while we use one term to describe the phenomenon of “cancer,” researchers increasingly came to realize that different cancers manifested in incredibly diverse ways, and thus required similarly diverse approaches to treatment.</p>

<p>Prior to this insight, a silver bullet was sought against all forms of cancer, and all kinds of ego-driven practices and overextrapolations of unclear research led to, for example, mastectomies that tore out nearly everything from the shoulder to the ribs, in the zealous belief that the cancer would thereby be rooted out.</p>

<p>How often do we hear “dyslexia” described as a general construct that requires a silver bullet solution? Yet increasing research demonstrates the genetic and biological variation in individual brain development that can manifest in difficulty with literacy or language — and may thus require differing forms of instruction and supports.</p>

<p>What are the implications for assessment? Here’s another passage that stood out on this idea of heterogeneity:</p>

<blockquote><p>But although these alternatives did not offer definitive cures, several important principles of cancer biology and cancer therapy were firmly cemented in these powerful trials. First, as Kaplan had found with Hodgkin’s disease, these trials again clearly etched the message that cancer was enormously heterogeneous. Breast or prostate cancers came in an array of forms, each with unique biological behaviors. The heterogeneity was genetic: in breast cancer, for instance, some variants responded to hormonal treatment, while others were hormone-unresponsive. And the heterogeneity was anatomic: some cancers were localized to the breast when detected, while others had a propensity to spread to distant organs.</p>

<p>Second, understanding that heterogeneity was of deep consequence. “Know thine enemy” runs the adage, and Fisher’s and Bonadonna’s trials had shown that it was essential to “know” the cancer as intimately as possible before rushing to treat it.</p></blockquote>

<p>I want to be careful about drawing too closely on an extended analogy between cancer and reading — but there is a similar need in schools to build more precise and accurate profiles of students to ensure the right form of instruction and intervention. We often land on the simple distinction of “students not meeting standards,” then rely on item analysis of standards (which are at a composite level of performance), rather than identifying the underlying literacy and language skills that could be targeted for further support.</p>

<p>Here’s another passage on cancer screening, which certainly has some similarities to screening for reading and language difficulty in schools:</p>

<blockquote><p>In cancer, where both overdiagnosis and underdiagnosis come at high costs, finding that exquisite balance is often impossible. We want every cancer test to operate with perfect specificity and sensitivity. But the technologies for screening are not perfect. . . .</p>

<p>No; merely detecting a small tumor is not sufficient. Cancer demonstrates a spectrum of behavior. . . To address the inherent behavioral heterogeneity of cancer, the screening test must go further. It must increase survival.</p></blockquote>

<p>For screening in schools, it must increase literacy attainment. And this is where the rubber hits the road. Even when a school is drowning in data, that does not mean the needed action will be taken, whether to improve core instruction across classrooms or to put in place the right interventions for the right groups of students at the right time.</p>

<p>Academic specialization produces terminology so precise that it is opaque to anyone outside the domain, and such work may seem far removed from classroom practice. But I don’t see how we can make headway until we more fully unpack how, when, and where learning happens in the brain, in relation to the body and its environment, while also identifying the forms and uses of language and literacy that are most fundamental.</p>

<p>Despite the great heterogeneity of cancer, scientists have begun to recognize some universal understandings that are leading to more effective treatments:</p>

<blockquote><p>Biologists looking directly into cancer’s maw now recognized that roiling beneath the incredible heterogeneity of cancer were behaviors, genes, and pathways. . . . Notably, Weinberg and Hanahan wrote, these six rules were not abstract descriptions of cancer’s behavior. Many of the genes and pathways that enabled each of these six behaviors had concretely been identified—ras, myc, Rb, to name just a few. The task now was to connect this causal understanding of cancer’s deep biology to the quest for its cure . . . The mechanistic maturity of cancer science would create a new kind of cancer medicine, Weinberg and Hanahan posited: “With holistic clarity of mechanism, cancer prognosis and treatment will become a rational science, unrecognizable by current practitioners.” Having wandered in the darkness for decades, scientists had finally reached a clearing in their understanding of cancer. Medicine’s task was to continue that journey toward a new therapeutic attack.</p></blockquote>

<p>We are <a href="https://languageandliteracy.blog/universals-of-language">beginning to recognize</a> some universals and particulars of language and literacy, as well. We can improve literacy outcomes for our society. It just may be much more complex, and progress much more slowly, than we’d like to think.</p>

<p>I’m not being a Pollyanna here on either front, by the way. My father died of lymphoma last December after being diagnosed in October, and his doctors seemed as surprised as we were when his artery ruptured suddenly just as his third round of chemotherapy began. Some forms of cancer will continue to kill us prematurely, despite our best efforts based on our current understanding of the research and the technological tools in our arsenal. And some children will continue to struggle to read and write fluently, despite the concerted efforts of many committed educators.</p>

<p>My sincere hope is that every casualty along the way provides new learning that can inform improvement. If we <a href="https://schoolecosystem.org/2015/12/08/failure-uncertainty-risk/">learn from every failure</a>, then each failure will not be in vain.</p>

<p><a href="https://languageandliteracy.blog/tag:cancer" class="hashtag"><span>#</span><span class="p-category">cancer</span></a> <a href="https://languageandliteracy.blog/tag:language" class="hashtag"><span>#</span><span class="p-category">language</span></a> <a href="https://languageandliteracy.blog/tag:literacy" class="hashtag"><span>#</span><span class="p-category">literacy</span></a> <a href="https://languageandliteracy.blog/tag:reading" class="hashtag"><span>#</span><span class="p-category">reading</span></a> <a href="https://languageandliteracy.blog/tag:dyslexia" class="hashtag"><span>#</span><span class="p-category">dyslexia</span></a> <a href="https://languageandliteracy.blog/tag:screening" class="hashtag"><span>#</span><span class="p-category">screening</span></a> <a href="https://languageandliteracy.blog/tag:SiddharthaMukerjee" class="hashtag"><span>#</span><span class="p-category">SiddharthaMukerjee</span></a> <a href="https://languageandliteracy.blog/tag:research" class="hashtag"><span>#</span><span class="p-category">research</span></a> <a href="https://languageandliteracy.blog/tag:knowledge" class="hashtag"><span>#</span><span class="p-category">knowledge</span></a> <a href="https://languageandliteracy.blog/tag:heterogeneity" class="hashtag"><span>#</span><span class="p-category">heterogeneity</span></a></p>

<p><a href="https://remark.as/p/languageandliteracy.blog/the-science-of-reading-and-cancer">Discuss...</a></p>
]]></content:encoded>
      <guid>https://languageandliteracy.blog/the-science-of-reading-and-cancer</guid>
      <pubDate>Sun, 26 Sep 2021 00:44:33 +0000</pubDate>
    </item>
  </channel>
</rss>