<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>sor &#8212; Language &amp; Literacy</title>
    <link>https://languageandliteracy.blog/tag:sor</link>
    <description>Musings about language and literacy and learning</description>
    <pubDate>Thu, 16 Apr 2026 06:25:00 +0000</pubDate>
    <image>
      <url>https://i.snap.as/LIFR67Bi.png</url>
      <title>sor &#8212; Language &amp; Literacy</title>
      <link>https://languageandliteracy.blog/tag:sor</link>
    </image>
    <item>
      <title>How you interpret “the science of reading” depends on how you think of “science”: Part IV</title>
      <link>https://languageandliteracy.blog/how-you-interpret-the-science-of-reading-depends-on-how-you-think-of-22wt?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[This is Part IV in a series digging into two articles from Keith Stanovich that provide useful ways for educators to understand the science in the science of reading.&#xA;&#xA;In Part I, we examined a 2003 article that proposed 5 different “styles” that can influence how science is conducted and perceived.&#xA;&#xA;Since Part II, we’ve been unpacking a long and stellar 2003 piece by Paula and Keith Stanovich, “Using Research and Reason in Education: How Teachers Can Use Scientifically Based Research To Make Curricular &amp; Instructional Decisions.”&#xA;&#xA;Today in Part IV, we continue onward deeper into the article to examine the oh-so very science-y aspects of experimental design.&#xA;&#xA;The Logic of the Experimental Method&#xA;&#xA;So here are two words that will cause an immediate and negative gut reaction in many involved in education: manipulation and control. Yet according to the Stanovichs, these sinister concepts are essential to experimental design:&#xA;&#xA;  “The heart of the experimental method lies in manipulation and control. In contrast to a correlational study, where the investigator simply observes whether the natural fluctuation in two variables displays a relationship, the investigator in a true experiment manipulates the variable thought to be the cause (the independent variable) and looks for an effect on the variable thought to be the effect (the dependent variable) while holding all other variables constant by control and randomization.”&#xA;&#xA;It is interesting to think that there is a certain cold calculation to effective experimental design that may in fact present a barrier to executing more controlled studies in real schools. RCTs, for example, are incredibly difficult to do in schools because no one in a school in their right mind wants to withhold a potentially more effective intervention or resource from kids who are not currently receiving it. 
Which then makes having a control group really hard to sustain.&#xA;&#xA;There’s a lot related to these concepts of manipulation and control that brings us back to causal inferences and the importance of attempting to falsify our hypotheses, which we briefly discussed in Part III. Frankly, I find all these seemingly simple concepts hard to fully grasp. It’s made me feel slightly better to see real scientists posting deep explorations of the complexity of making accurate causal inferences on my Twitter timeline.&#xA;&#xA;I think in education we do have some grasp and application of these ideas. For example, concepts of team inquiry, problem-solving, and data-based decision-making are oriented around looking at student data and coming up with some kind of problem of practice and theory of change or hypothesis, then taking action steps to address it.&#xA;&#xA;The Stanovichs point this out at the close of this paper, in fact:&#xA;&#xA;  Effective teachers engage in scientific thinking in their classrooms in a variety of ways: when they assess and evaluate student performance, develop Individual Education Plans (IEPs) for their students with disabilities, reflect on their practice, or engage in action research.&#xA;&#xA;  This assessment cycle looks even more like the scientific method when teachers (as part of a multidisciplinary team) are developing and implementing an IEP for a student with a disability. The team must assess and evaluate the student’s learning strengths and difficulties, develop hypotheses about the learning problems, select curriculum goals and objectives, base instruction on the hypotheses and the goals selected, teach, and evaluate the outcomes of that teaching. If the teaching is successful (goals and objectives are attained), the cycle continues with new goals. 
If the teaching has been unsuccessful (goals and objectives have not been achieved), the cycle begins again with new hypotheses.&#xA;&#xA;Yet perhaps an area where we fall most short in terms of aligning more directly to the real scientific method is that we typically move confidently and brashly towards action without either strategically creating a group of students who do not receive the supports we think will be most effective, or considering in advance what evidence would most clearly prove or falsify our theory of change. Instead, we often seem to default towards the notion that if we’ve taken action, we’ve achieved success. Everyone in education likes a good celebration, especially those in more political positions, but “confronting the brutal facts” in normed or standardized data seems to be a rarer exercise, other than as media headlines when the latest NAEP results come out. (For a model of an educator making this shift towards confronting the brutal facts in literacy outcome data, please read this blog from The Right to Read Project).&#xA;&#xA;In the tech world, A/B testing is a well-known and common undertaking. Maybe we need to keep simple study designs like this in mind and start smaller in our change endeavors. Maybe it’s not always about one wholesale theory of change, but rather about having a few different theories that we can keep testing in simple ways, with different students, classes, or groups. Or maybe the flipside is better? 
I’ve explored the idea that coherence within and beyond a school may actually matter more than “research-based” practices elsewhere, so it’s an open question as to whether putting a firm directive out and asking everyone to get on board (more typical), versus opening up a space for teacher teams to try out different ways to meet a goal, is ultimately more effective in impacting collective outcomes for students.&#xA;&#xA;The Need for Both Correlational Methods and True Experiments&#xA;&#xA;To this end, the Stanovichs highlight a great point that there’s a need for multiple approaches to experimental design. We then must look across findings from these divergent approaches for converging evidence:&#xA;&#xA;  It is necessary to amalgamate the results from not only experimental investigations, but correlational studies, nonequivalent control group studies, time series designs, and various other quasi-experimental designs and multivariate correlational designs, all have their strengths and weaknesses. For example, it is often (but not always) the case that experimental investigations are high in internal validity, but limited in external validity, whereas correlational studies are often high in external validity, but low in internal validity.&#xA;&#xA;  Convergence increases our confidence in the external and internal validity of our conclusions.&#xA;&#xA;The Role of Case Studies and Qualitative Investigations&#xA;&#xA;I found this section of the paper, in which the Stanovichs explain how case studies and qualitative studies operate in conjunction with quantitative research, especially fascinating.&#xA;&#xA;In their explanation, case studies and qualitative investigations are most useful as explorations of a problem. 
Once a theory is more fully formed, however, ensuring it can be stress tested in full to “rule out alternative explanations” requires the “comparative information” provided by quantitative experiments.&#xA;&#xA;  Where qualitative investigations are useful relates strongly to a distinction in philosophy of science between the context of discovery and the context of justification. Qualitative research, case studies, and clinical observations support a context of discovery where, as Levin and O’Donnell (2000) note in an educational context, such research must be regarded as “preliminary/exploratory, observational, hypothesis generating.”&#xA;&#xA;  In education, however, investigators sometimes claim to be pursuing Objective B but slide over into Objective A without realizing they have made a crucial switch. They want to make comparative, or quantitative, statements, but have not carried out the proper types of investigation to justify them.&#xA;&#xA;  Case studies and qualitative description lack the comparative information necessary to prove that a particular theory or educational practice is superior, because they fail to test an alternative; they rule nothing out. Take the seminal work of Jean Piaget for example. His studies were critical in pointing developmental psychology in new and important directions, but many of his theoretical conclusions and causal explanations did not hold up in controlled experiments.&#xA;&#xA;Woah.&#xA;&#xA;My thinking is that while case studies can be a good way to get an investigation of a problem started, I suspect they can also serve as a great bookend to the journey. In other words, I suspect that once a theory has some solid backing from converging evidence, drawing up case studies and gaining fuller context of its application in the real world with qualitative description can also be really beneficial.&#xA;&#xA;So: a qualitative-quantitative-qualitative sandwich! I don’t know, I’m not a researcher. 
But people seem to like case studies, so it seems like a good communication and educative tool, in addition to the beginning of an exploration.&#xA;&#xA;Teachers and Researchers: Commonality in a “What Works” Epistemology&#xA;&#xA;The Stanovichs close this tour-de-force exposition with some connections between the thinking necessary for science and how it can be leveraged by teachers:&#xA;&#xA;  Drawing upon personal experience is necessary and desirable in a veteran teacher, but it is not sufficient for making critical judgments about the effectiveness of an instructional strategy or curriculum.&#xA;&#xA;  Teachers need creativity, but they also need to demonstrate that they know what evidence is, and that they recognize that they practice in a profession based in behavioral science.&#xA;&#xA;One of the pitfalls in teaching is that we often rely too heavily on personal observation, rather than systematic empiricism that can move us past our subjective assumptions and more accurately surface the underlying causes of what we see.&#xA;&#xA;I suspect this is why running records are such a strongly held practice, even in the midst of shifts to universal screening instruments and systematic phonics programs.&#xA;&#xA;Yet the Stanovichs end on a hopeful note that there is a researcher-teacher partnership that can be forged:&#xA;&#xA;  Teachers believe in a world that is predictable and controllable by manipulations that they use in their professional practice, just as scientists do. 
Researchers and educators are kindred spirits in their approach to knowledge, an important fact that can be used to forge a coalition to bring hard-won research knowledge to light in the classroom.&#xA;&#xA;This was a lot to unpack, and it sure took me some time to pick away at it, but I feel like I came away with a deeper understanding of research and how to consider it in the context of education.&#xA;&#xA;The crazy thing is, I feel like there’s still a lot I just touched the surface of, or missed entirely. But I’ve got other things calling to me to write about, and it’s time to move on. Thank you, Keith and Paula Stanovich, for sharing your wisdom in these gems — and I hope, dear reader, I’ve inspired you to take up either article and read it, or re-read it, as the case may be, to dig into it on your own.&#xA;&#xA;Over and out.&#xA;&#xA;#research #Stanovich #science #scienceofreading #SOR #empiricism #reading]]&gt;</description>
      <content:encoded><![CDATA[<p>This is Part IV in a series digging into two articles from Keith Stanovich that provide useful ways for educators to understand the <strong>science</strong> in the <em>science of reading</em>.</p>

<p><a href="https://languageandliteracy.blog/how-you-interpret-the-science-of-reading-depends-on-how-you-think-of">In Part I</a>, we examined a 2003 article that proposed 5 different “styles” that can influence how science is conducted and perceived.</p>

<p><a href="https://languageandliteracy.blog/how-you-interpret-the-science-of-reading-depends-on-how-you-think-of-z354">Since Part II</a>, we’ve been unpacking a long and stellar 2003 piece by Paula and Keith Stanovich, <a href="https://www.thereadingleague.org/wp-content/uploads/2018/10/Using-Research-Reason-Stanovich.pdf">“<em>Using Research and Reason in Education: How Teachers Can Use Scientifically Based Research To Make Curricular &amp; Instructional Decisions</em>.”</a></p>

<p><strong>Today in Part IV</strong>, we continue onward deeper into the article to examine the oh-so very science-y aspects of <strong><em>experimental design</em></strong>.
</p>

<h1 id="the-logic-of-the-experimental-method">The Logic of the Experimental Method</h1>

<p>So here are two words that will cause an immediate and negative gut reaction in many involved in education: <strong>manipulation</strong> and <strong>control</strong>. Yet according to the Stanovichs, these sinister concepts are essential to experimental design:</p>

<blockquote><p>“The heart of the experimental method lies in manipulation and control. In contrast to a correlational study, where the investigator simply observes whether the natural fluctuation in two variables displays a relationship, the investigator in a true experiment manipulates the variable thought to be the cause (the independent variable) and looks for an effect on the variable thought to be the effect (the dependent variable) while holding all other variables constant by control and randomization.”</p></blockquote>

<p>It is interesting to think that there is a certain cold calculation to effective experimental design that may in fact present a barrier to executing more controlled studies in real schools. RCTs, for example, are incredibly difficult to do in schools because no one in a school in their right mind wants to withhold a potentially more effective intervention or resource from kids who are not currently receiving it. Which then makes having a control group really hard to sustain.</p>

<p>There’s a lot related to these concepts of <em>manipulation</em> and <em>control</em> that brings us back to <em>causal inferences</em> and the importance of attempting to <em>falsify</em> our hypotheses, which we briefly discussed <a href="https://languageandliteracy.blog/how-you-interpret-the-science-of-reading-depends-on-how-you-think-of-mcfr">in Part III</a>. Frankly, I find all these seemingly simple concepts hard to fully grasp. It’s made me feel slightly better to see real scientists posting deep explorations of the complexity of making accurate causal inferences on my Twitter timeline.</p>

<p>I think in education we do have some grasp and application of these ideas. For example, concepts of team inquiry, problem-solving, and data-based decision-making are oriented around looking at student data and coming up with some kind of problem of practice and theory of change or hypothesis, then taking action steps to address it.</p>

<p>The Stanovichs point this out at the close of this paper, in fact:</p>

<blockquote><p>Effective teachers engage in scientific thinking in their classrooms in a variety of ways: when they assess and evaluate student performance, develop Individual Education Plans (IEPs) for their students with disabilities, reflect on their practice, or engage in action research.</p>

<p>This assessment cycle looks even more like the scientific method when teachers (as part of a multidisciplinary team) are developing and implementing an IEP for a student with a disability. The team must assess and evaluate the student’s learning strengths and difficulties, develop hypotheses about the learning problems, select curriculum goals and objectives, base instruction on the hypotheses and the goals selected, teach, and evaluate the outcomes of that teaching. If the teaching is successful (goals and objectives are attained), the cycle continues with new goals. If the teaching has been unsuccessful (goals and objectives have not been achieved), the cycle begins again with new hypotheses.</p></blockquote>

<p>Yet perhaps an area where we fall most short in terms of aligning more directly to the real scientific method is that we typically move confidently and brashly towards action without either strategically creating a group of students who do <em>not</em> receive the supports we think will be most effective, or considering in advance what evidence would most clearly prove or falsify our theory of change. Instead, we often seem to default towards the notion that if we’ve taken action, we’ve achieved success. Everyone in education likes a good celebration, especially those in more political positions, but <a href="https://schoolecosystem.wordpress.com/2013/11/25/good-to-great/">“confronting the brutal facts”</a> in normed or standardized data seems to be a rarer exercise, other than as media headlines when the latest NAEP results come out. (For a model of an educator making this shift towards confronting the brutal facts in literacy outcome data, please <a href="https://righttoreadproject.com/2021/11/08/data-the-closest-thing-we-have-to-a-crystal-ball/">read this blog from The Right to Read Project</a>).</p>

<p>In the tech world, <a href="https://en.wikipedia.org/wiki/A/B_testing">A/B testing</a> is a well-known and common undertaking. Maybe we need to keep simple study designs like this in mind and start smaller in our change endeavors. Maybe it’s not always about one wholesale theory of change, but rather about having a few different theories that we can keep testing in simple ways, with different students, classes, or groups. Or maybe the flipside is better? I’ve explored the idea that <a href="https://schoolecosystem.wordpress.com/2019/12/30/when-everyone-pulls-together-the-secrets-of-success-academy/">coherence within and beyond a school</a> may actually matter more than “research-based” practices elsewhere, so it’s an open question as to whether putting a firm directive out and asking everyone to get on board (more typical), versus opening up a space for teacher teams to try out different ways to meet a goal, is ultimately more effective in impacting collective outcomes for students.</p>
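To make that concrete, here is a toy sketch of the arithmetic behind a two-group A/B-style check. The assessment scores are invented, it uses only standard-library Python, and a normal approximation stands in for the exact t distribution, so treat it as an illustration of the logic rather than a real analysis.

```python
# Toy A/B comparison: hypothetical assessment scores for two groups of students.
# Group A keeps the current approach; Group B tries the change being tested.
from math import sqrt
from statistics import NormalDist, mean, stdev

group_a = [61, 72, 68, 75, 70, 66, 73, 69]
group_b = [74, 78, 70, 81, 77, 72, 79, 76]

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    return (mean(b) - mean(a)) / se

t = welch_t(group_a, group_b)
# Two-sided p-value via a normal approximation (rough, but fine for intuition).
p = 2 * (1 - NormalDist().cdf(abs(t)))
print(f"t = {t:.2f}, approx p = {p:.4f}")
```

The point is less the statistics than the habit: define the two conditions up front, decide what evidence would count, then look.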

<h1 id="the-need-for-both-correlational-methods-and-true-experiments">The Need for Both Correlational Methods and True Experiments</h1>

<p>To this end, the Stanovichs highlight a great point that there’s a need for multiple approaches to experimental design. We then must look across findings from these divergent approaches for <em>converging evidence</em>:</p>

<blockquote><p>It is necessary to amalgamate the results from not only experimental investigations, but correlational studies, nonequivalent control group studies, time series designs, and various other quasi-experimental designs and multivariate correlational designs, all have their strengths and weaknesses. For example, it is often (but not always) the case that experimental investigations are high in internal validity, but limited in external validity, whereas correlational studies are often high in external validity, but low in internal validity.</p>

<p>Convergence increases our confidence in the external and internal validity of our conclusions.</p></blockquote>

<h1 id="the-role-of-case-studies-and-qualitative-investigations">The Role of Case Studies and Qualitative Investigations</h1>

<p>I found this section of the paper, in which the Stanovichs explain how case studies and qualitative studies operate in conjunction with quantitative research, especially fascinating.</p>

<p>In their explanation, case studies and qualitative investigations are most useful as explorations of a problem. Once a theory is more fully formed, however, ensuring it can be stress tested in full to “rule out alternative explanations” requires the “comparative information” provided by quantitative experiments.</p>

<blockquote><p>Where qualitative investigations are useful relates strongly to a distinction in philosophy of science between the context of discovery and the context of justification. Qualitative research, case studies, and clinical observations support a context of discovery where, as Levin and O’Donnell (2000) note in an educational context, such research must be regarded as “preliminary/exploratory, observational, hypothesis generating.”</p>

<p>In education, however, investigators sometimes claim to be pursuing Objective B but slide over into Objective A without realizing they have made a crucial switch. They want to make comparative, or quantitative, statements, but have not carried out the proper types of investigation to justify them.</p>

<p>Case studies and qualitative description lack the comparative information necessary to prove that a particular theory or educational practice is superior, because they fail to test an alternative; they rule nothing out. Take the seminal work of Jean Piaget for example. His studies were critical in pointing developmental psychology in new and important directions, but many of his theoretical conclusions and causal explanations did not hold up in controlled experiments.</p></blockquote>

<p>Woah.</p>

<p>My thinking is that while case studies can be a good way to get an investigation of a problem started, I suspect they can also serve as a great bookend to the journey. In other words, I suspect that once a theory has some solid backing from converging evidence, drawing up case studies and gaining fuller context of its application in the real world with qualitative description can also be really beneficial.</p>

<p>So: a qualitative-quantitative-qualitative sandwich! I don’t know, I’m not a researcher. But people seem to like case studies, so it seems like a good communication and educative tool, in addition to the beginning of an exploration.</p>

<h1 id="teachers-and-researchers-commonality-in-a-what-works-epistemology">Teachers and Researchers: Commonality in a “What Works” Epistemology</h1>

<p>The Stanovichs close this tour-de-force exposition with some connections between the thinking necessary for science and how it can be leveraged by teachers:</p>

<blockquote><p>Drawing upon personal experience is necessary and desirable in a veteran teacher, but it is not sufficient for making critical judgments about the effectiveness of an instructional strategy or curriculum.</p>

<p>Teachers need creativity, but they also need to demonstrate that they know what evidence is, and that they recognize that they practice in a profession based in behavioral science.</p></blockquote>

<p>One of the pitfalls in teaching is that we often rely too heavily on personal observation, rather than systematic empiricism that can move us past our subjective assumptions and more accurately surface the underlying causes of what we see.</p>

<p>I suspect this is why running records are such a strongly held practice, even in the midst of shifts to universal screening instruments and systematic phonics programs.</p>

<p>Yet the Stanovichs end on a hopeful note that there is a researcher-teacher partnership that can be forged:</p>

<blockquote><p>Teachers believe in a world that is predictable and controllable by manipulations that they use in their professional practice, just as scientists do. Researchers and educators are kindred spirits in their approach to knowledge, an important fact that can be used to forge a coalition to bring hard-won research knowledge to light in the classroom.</p></blockquote>

<p>This was a lot to unpack, and it sure took me some time to pick away at it, but I feel like I came away with a deeper understanding of research and how to consider it in the context of education.</p>

<p>The crazy thing is, I feel like there’s still a lot I just touched the surface of, or missed entirely. But I’ve got other things calling to me to write about, and it’s time to move on. Thank you, Keith and Paula Stanovich, for sharing your wisdom in these gems — and I hope, dear reader, I’ve inspired you to take up either article and read it, or re-read it, as the case may be, to dig into it on your own.</p>

<p>Over and out.</p>

<p><a href="https://languageandliteracy.blog/tag:research" class="hashtag"><span>#</span><span class="p-category">research</span></a> <a href="https://languageandliteracy.blog/tag:Stanovich" class="hashtag"><span>#</span><span class="p-category">Stanovich</span></a> <a href="https://languageandliteracy.blog/tag:science" class="hashtag"><span>#</span><span class="p-category">science</span></a> <a href="https://languageandliteracy.blog/tag:scienceofreading" class="hashtag"><span>#</span><span class="p-category">scienceofreading</span></a> <a href="https://languageandliteracy.blog/tag:SOR" class="hashtag"><span>#</span><span class="p-category">SOR</span></a> <a href="https://languageandliteracy.blog/tag:empiricism" class="hashtag"><span>#</span><span class="p-category">empiricism</span></a> <a href="https://languageandliteracy.blog/tag:reading" class="hashtag"><span>#</span><span class="p-category">reading</span></a></p>
]]></content:encoded>
      <guid>https://languageandliteracy.blog/how-you-interpret-the-science-of-reading-depends-on-how-you-think-of-22wt</guid>
      <pubDate>Sat, 10 Sep 2022 15:00:18 +0000</pubDate>
    </item>
    <item>
      <title>How you interpret “the science of reading” depends on how you think of “science”: Part III</title>
      <link>https://languageandliteracy.blog/how-you-interpret-the-science-of-reading-depends-on-how-you-think-of-mcfr?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[The “science of reading” has become a loaded term — partly due to how “science” itself may be conceived. Since starting this series (yes, I know, I take a really long time to write posts), there’s been a fascinating trend of articles reacting to the term in various ways. These takes seem only slated to increase, given the wide attention this recent tidy overview on the push for SOR in Time has received, just as one example.&#xA;&#xA;In Part I, we examined a 2003 article by Keith Stanovich that proposed 5 different “styles” that can influence how science is conducted and perceived. In that article, we learned that in education there may be a tendency to lean towards “coherence” in narratives or the “uniqueness” presented by silver bullet fads. These tendencies can and do subvert science-based reading practice.&#xA;&#xA;In Part II, we began our analysis of yet another stellar 2003 piece by Paula and Keith Stanovich, which lays out the importance of drawing on the cumulative base of scientific findings on reading, rather than on gurus, personal agendas, and politics, as the field of education so often tends to. We learned that while peer reviewed research may not be a guarantee of quality, it is at the very least a minimum criterion that establishes such research as a part of the accumulating “public” realm of scientific knowledge.&#xA;&#xA;Today in Part III, we continue onward with the article from Part II, “Using Research and Reason in Education: How Teachers Can Use Scientifically Based Research To Make Curricular &amp; Instructional Decisions,” as it is a lengthy one and there’s quite a bit more left to unpack.&#xA;&#xA;For example, the importance of an empirical approach to reading practice . . .&#xA;&#xA;Research-based Practice Relies on Systematic Empiricism&#xA;&#xA;There’s a lot of talk in education-related policy about research-based or evidence-based practice, but what does that mean? 
According to the Stanovichs, research-based practice is grounded in systematic empiricism.&#xA;&#xA;What is empiricism? Empiricism is knowledge derived from evidence; this is the basis of the scientific method. All empirical theories must be tested by real-world observations, rather than drawn from philosophical musing or heartfelt intuition.&#xA;&#xA;Basic, right? But the field of education is a realm as subject to political, bureaucratic, and ideological whims and sophistry as it is to taking action based on data from the students in front of us.&#xA;&#xA;Empiricism thus starts with observation, but according to the Stanovichs, it’s more than that:&#xA;&#xA;  “Observation itself is fine and necessary, but pure, unstructured observation of the natural world will not lead to scientific knowledge. . . Scientific observation is termed systematic because it is structured so that the results of the observation reveal something about the underlying causal structure of events in the world. Observations are structured so that, depending upon the outcome of the observation, some theories of the causes of the outcome are supported and others rejected.”&#xA;&#xA;It’s worth unpacking the term “causal” (easily mistaken for casual by casual readers) here, as it’s one of those academic terms used frequently by researchers and not so frequently by teachers.&#xA;&#xA;When we observe events in the real world, we primarily see the interwoven effects of many underlying factors. In order to disentangle and identify the specific causes of effects, researchers design tests to isolate variables that can allow them to make inferences—causal inferences—about complex and interrelated phenomena.&#xA;&#xA;When it is claimed that one form of instruction is better than another (e.g. phonics vs. 
whole word instruction), this is a causal claim that can be tested systematically and empirically.&#xA;&#xA;When testing such claims using systematic empiricism and looking at the evidence base, there must be a space allowed for being wrong. According to the Stanovichs, this is called the “falsifiability criterion”:&#xA;&#xA;  A scientific theory must always be stated in such a way that the predictions derived from it can potentially be shown to be false.&#xA;&#xA;A brief digression here on this concept of falsifiability: there are many within the “science of reading” camp—as with any other tribe in education—who can be overly strident in their claims about specific research-based practices. Examples are: sound walls, decodables, and multisensory instruction. Despite the coherence with existing converging evidence offered by each of these approaches, each has yet to be empirically proven as better than other approaches (I should add there is great debate about all of this, of course). This doesn’t mean they aren’t better — just that the peer reviewed evidence is not quite there to state more unequivocally that they are.&#xA;&#xA;We’ve examined related controversy on phonological awareness instruction here on this blog in the past.&#xA;&#xA;While this can be frustrating to those of us who wish to live in a clearly defined black-and-white world of proven approaches based on the “science of reading,” the deeper beauty of science is that it is completely agnostic as to anyone’s preferred outcomes (as with nature). The scientific truth lies on the ever shifting dune-face of peer reviewed evidence. This thus requires intellectual humility and the willingness to shift one’s beliefs in the face of a slowly accumulating evidence base.&#xA;&#xA;  Proponents of an educational practice should be asked for evidence; they should also be willing to admit that contrary data will lead them to abandon the practice. 
True scientific knowledge is held tentatively and is subject to change based on contrary evidence.&#xA;&#xA;So what does this mean for teachers? It doesn’t mean we shouldn’t test out sound walls or use decodables or go all in on multisensory instruction — it means that when we do, we should recognize them, humbly and honestly, as tests of a hypothesis. Teachers are quite familiar with the phenomenon of something that works one period with one group of students bombing completely with the next. What works is dependent on any number of given variables. It’s certainly worth strategically trying different approaches until outcomes can cumulatively demonstrate the intended effect. This is what science is all about.&#xA;&#xA;Objectivity and Intellectual Honesty&#xA;&#xA;Philosopher Jonathan Adler (1998) teaches us that science values another aspect of open-mindedness even more highly: “What truly marks an open-minded person is the willingness to follow where evidence leads. . . Scientific method is attunement to the world, not to ourselves.”&#xA;&#xA;As the Stanovichs point out, scientists are also flawed human beings. They are individually no more objective than the rest of us. Yet the greater endeavor of science countervails this fallibility with a “process of checks and balances.” This process is inherently social: scientists engage in peer critique of one another’s assumptions, biases, and conclusions.&#xA;&#xA;  “Purveyors of pseudoscientific educational practices fail the test of objectivity and are often identifiable by their attempts to do an “end run” around the public mechanisms of science by avoiding established peer review mechanisms and the information-sharing mechanisms that make replication possible. Instead, they attempt to promulgate their findings directly to consumers, such as teachers.”&#xA;&#xA;This is an important caution to bear in mind. We have an over-abundance of “direct to consumer” products and pitches in our field. 
Furthermore, even when something is research-based via clinical studies, there is still the issue of translation and implementation in the complex and complicated world of real schools and classrooms.&#xA;&#xA;Either way, in classrooms and schools we also need our own version of peer review: a social process of checks and balances. Through dialogue and shared analysis of student data in teams, and via peer classroom intervisitations and feedback, we can get our assumptions and hypotheses checked.&#xA;&#xA;The Principle of Converging Evidence&#xA;&#xA;Research itself is hardly pristine, either. All individual studies are imperfect in their own way (and each should clearly outline its limitations) — but when taken collectively, they can provide robust conclusions.&#xA;&#xA;  “Scientists do not evaluate data from a single experiment that has finally been designed in the perfect way. They most often evaluate data from dozens of experiments, each containing some flaws but providing part of the answer.”&#xA;&#xA;This idea of converging evidence is critical, because research takes time to conduct, write up, and publish, and it’s easy to get taken up with the latest finding and lose sight of a wider body of evidence, especially findings established by a previous generation. It’s furthermore difficult to connect the dots between different journals and bodies of knowledge. Like any human endeavor, there are geographical, institutional, and social networks and gaps between scientists that make cross-disciplinary connections harder to forge over time. 
Some of the best peer-reviewed articles provide a comprehensive overview of the extant research, and meta-analysis can also be invaluable for sifting through the bricolage and separating the wheat from the chaff.&#xA;&#xA;This makes me think relatedly about how important it is to consider a variety of sources of information about our students — including talking directly to them and their families. I used to coordinate and write Individualized Education Plans (IEPs) for students in my building, and it was only after examining a student’s social history, multiple years of academic performance, multiple test scores across domains, considering the student’s behavior and performance across classrooms, examining writing and work samples, interviewing the student, and speaking with all of his or her teachers and service providers and family that I began to feel like I was getting somewhere in understanding their strengths and needs. One data point will tell you very little about a student. But with multiple forms of data, both qualitative and quantitative, we can tell a story from the converging evidence.&#xA;&#xA;More to Come&#xA;&#xA;Can you believe that we still aren’t done exploring Using Research and Reason in Education: How Teachers Can Use Scientifically Based Research To Make Curricular &amp; Instructional Decisions by Paula and Keith Stanovich?!&#xA;&#xA;Nope. 
There’s that much good stuff in there to unpack.&#xA;&#xA;Before we wrap up, let’s review what we’ve covered thus far since Part I:&#xA;&#xA;The way we think about and perceive the world influences how we think about science.&#xA;Some ways of thinking about and perceiving the world lend themselves more readily to science.&#xA;Having different lenses for viewing the world can also be beneficial to science and we need to be flexible to allow for the complexity of the world.&#xA;Our beliefs are compelled by stories and we prefer to have our beliefs fit together coherently, but we need to also be able to change our beliefs to match the evidence. And scientists need to be able to tell better stories about what the evidence shows.&#xA;We are also easily compelled by flashy, unique new findings that can present a magic bullet, rather than by a firm evidence base.&#xA;Resolving disputes scientifically means being willing to resolve differences via mediated collaboration and peer reviewed, published findings.&#xA;Teachers need to become more familiar with peer reviewed findings rather than rely on the self-proclaimed expertise of gurus.&#xA;Peer review ensures a modicum of checks against pseudoscience and establishes the research as part of the realm of “public” knowledge.&#xA;The peer reviewed evidence base needs to converge in its findings across multiple imperfect studies to become established as robust research-based practice.&#xA;Systematic empiricism is the basis for testing hypotheses and establishing what is “research-based practice.”&#xA;We need to have the flexibility and intellectual humility to change our beliefs based on the converging evidence.&#xA;&#xA;To be continued in Part IV of our series. 
Thanks for joining me.&#xA;&#xA;#Stanovich #science #scienceofreading #SOR #research #empiricism&#xA;&#xA;&lt;a href=&#34;https://remark.as/p/languageandliteracy.blog/how-you-interpret-the-science-of-reading-depends-on-how-you-think-of-mcfr&#34;&gt;Discuss...&lt;/a&gt;]]&gt;</description>
      <content:encoded><![CDATA[<p><em>The “science of reading” has become a loaded term — partly due to how “science” itself may be conceived. Since <a href="https://languageandliteracy.blog/how-you-interpret-the-science-of-reading-depends-on-how-you-think-of">starting this series</a> (yes, I <strong>know</strong>, I take a really long time to write posts), there’s been a fascinating wave of articles reacting to the term in various ways. These takes seem likely only to increase, given the wide attention <a href="https://time.com/6205084/phonics-science-of-reading-teachers/">this recent tidy overview</a> of the push for SOR in Time has received, to cite just one example.</em></p>

<p><a href="https://languageandliteracy.blog/how-you-interpret-the-science-of-reading-depends-on-how-you-think-of"><strong>In Part I</strong></a>, we examined a 2003 article by Keith Stanovich that proposed five different “styles” that can influence how science is conducted and perceived. In that article, we learned that in education there may be a tendency to lean towards “coherence” in narratives or the “uniqueness” presented by silver-bullet fads. These tendencies can and do subvert science-based reading practice.</p>

<p><a href="https://languageandliteracy.blog/how-you-interpret-the-science-of-reading-depends-on-how-you-think-of-z354"><strong>In Part II</strong></a>, we began our analysis of yet another stellar 2003 piece by Paula and Keith Stanovich, which lays out the importance of drawing on the cumulative base of scientific findings on reading, rather than on gurus, personal agendas, and politics, as the field of education so often does. We learned that while peer-reviewed research may not be a guarantee of quality, it is at the very least a minimum criterion that establishes such research as a part of the accumulating “public” realm of scientific knowledge.</p>

<p><strong>Today in Part III</strong>, we continue onward with the article from Part II, <a href="https://www.thereadingleague.org/wp-content/uploads/2018/10/Using-Research-Reason-Stanovich.pdf">“<em>Using Research and Reason in Education: How Teachers Can Use Scientifically Based Research To Make Curricular &amp; Instructional Decisions</em>,”</a> as it is a lengthy one and there’s quite a bit more left to unpack.

For example, the importance of an <strong>empirical</strong> approach to reading practice . . .</p>

<h1 id="research-based-practice-relies-on-systematic-empiricism">Research-based Practice Relies on Systematic Empiricism</h1>

<p>There’s a lot of talk in education-related policy about <em>research-based</em> or <em>evidence-based</em> practice, but what does that mean? According to the Stanovichs, research-based practice is grounded in <em>systematic empiricism</em>.</p>

<p>What is <em>empiricism</em>? Empiricism is knowledge derived from evidence; this is the basis of the scientific method. All empirical theories must be tested by real-world observations, rather than drawn from philosophical musing or heartfelt intuition.</p>

<p>Basic, right? But the field of education is a realm as subject to political, bureaucratic, and ideological whims and sophistry as it is to taking action based on data from the students in front of us.</p>

<p>Empiricism thus starts with observation, but according to the Stanovichs, it’s more than that:</p>

<blockquote><p>“Observation itself is fine and necessary, but pure, unstructured observation of the natural world will not lead to scientific knowledge. . . Scientific observation is termed systematic because it is structured so that the results of the observation reveal something about the underlying causal structure of events in the world. <strong>Observations are structured so that, depending upon the outcome of the observation, some theories of the causes of the outcome are supported and others rejected</strong>.”</p></blockquote>

<p>It’s worth unpacking the term “<em>causal</em>” (easily mistaken for casual by casual readers) here, as it’s one of those academic terms used frequently by researchers and not so frequently by teachers.</p>

<p>When we observe events in the real world, we primarily see the interwoven effects of many underlying factors. In order to disentangle and identify the specific causes of effects, researchers design tests to isolate variables that can allow them to make inferences—<em>causal</em> inferences—about complex and interrelated phenomena.</p>

<p>When it is claimed that one form of instruction is better than another (e.g., phonics vs. whole-word instruction), this is a causal claim that can be tested systematically and empirically.</p>
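<p>To make this logic concrete, here is a minimal, purely illustrative sketch of how such a causal claim gets tested: manipulate the independent variable (instruction type) via random assignment, measure the dependent variable (a reading score), and ask how often chance alone would produce the observed difference. All numbers below are invented, and the permutation test shown is just one simple way to do that last step.</p>

```python
import random
import statistics

random.seed(1)  # reproducible illustration

# Hypothetical post-test reading scores: the "treatment" group received
# one form of instruction, the "control" group another. All numbers are
# invented for illustration only; this is not real data.
treatment = [random.gauss(72, 8) for _ in range(30)]
control = [random.gauss(65, 8) for _ in range(30)]

observed_diff = statistics.mean(treatment) - statistics.mean(control)

# Permutation test: if instruction type had no causal effect, shuffling
# the group labels should produce differences as large as the observed
# one fairly often. A small p-value counts as evidence against that.
combined = treatment + control
extreme = 0
trials = 5000
for _ in range(trials):
    random.shuffle(combined)
    diff = statistics.mean(combined[:30]) - statistics.mean(combined[30:])
    if abs(diff) >= abs(observed_diff):
        extreme += 1
p_value = extreme / trials

print(f"observed difference: {observed_diff:.1f} points, p = {p_value:.4f}")
```

The randomization is what licenses the causal inference: with random assignment, the only systematic difference between groups is the manipulated variable.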

<p>When testing such claims using systematic empiricism and looking at the evidence base, there must be a space allowed for being wrong. According to the Stanovichs, this is called the “<em>falsifiability criterion</em>”:</p>

<blockquote><p>A scientific theory must always be stated in such a way that the predictions derived from it can potentially be shown to be false.</p></blockquote>

<p>A brief digression here on this concept of <em>falsifiability</em>: there are many within the “science of reading” camp—as with any other tribe in education—who can be overly strident in their claims about specific research-based practices. Examples are: sound walls, decodables, and multisensory instruction. Despite the <a href="https://languageandliteracy.blog/how-you-interpret-the-science-of-reading-depends-on-how-you-think-of"><em>coherence</em></a> with existing converging evidence offered by each of these approaches, each has yet to be empirically proven as better than other approaches (I should add there is great debate about all of this, of course). This doesn’t mean they aren’t better — just that the peer-reviewed evidence is not yet there to state unequivocally that they are.</p>

<p>We’ve examined related controversy on <a href="https://languageandliteracy.blog/the-sound-and-the-fury-of-phonemes-and-reading">phonological awareness instruction</a> here on this blog in the past.</p>

<p>While this can be frustrating to those of us who wish to live in a clearly defined black-and-white world of proven approaches based on the “science of reading,” the deeper beauty of science is that it is completely agnostic as to anyone’s preferred outcomes (<a href="https://schoolecosystem.wordpress.com/2017/03/08/living-in-tune-with-nature-isnt-about-being-happy/">as with nature</a>). The scientific truth lies on the ever-shifting dune-face of peer-reviewed evidence. This thus requires intellectual humility and the willingness to shift one’s beliefs in the face of a slowly accumulating evidence base.</p>

<blockquote><p>Proponents of an educational practice should be asked for evidence; they should also be willing to admit that contrary data will lead them to abandon the practice. True scientific knowledge is held tentatively and is subject to change based on contrary evidence.</p></blockquote>

<p>So what does this mean for teachers? It doesn’t mean we shouldn’t test out sound walls or use decodables or go all in on multisensory instruction — it means that when we do, we should recognize them, humbly and honestly, as tests of a hypothesis. Teachers are quite familiar with the phenomenon of something that works one period with one group of students bombing completely with the next. What works is dependent on any number of given variables. It’s certainly worth strategically trying different approaches until outcomes can cumulatively demonstrate the intended effect. This is what science is all about.</p>

<h1 id="objectivity-and-intellectual-honesty">Objectivity and Intellectual Honesty</h1>

<p>Philosopher Jonathan Adler (1998) teaches us that science values another aspect of open-mindedness even more highly: “What truly marks an open-minded person is the willingness to follow where evidence leads. . . <strong>Scientific method is attunement to the world, not to ourselves</strong>.”</p>

<p>As the Stanovichs point out, scientists are also flawed human beings. They are individually no more objective than the rest of us. Yet the greater endeavor of science countervails this fallibility with a “process of checks and balances.” This process is inherently social: scientists engage in peer critique of one another’s assumptions, biases, and conclusions.</p>

<blockquote><p>“Purveyors of pseudoscientific educational practices fail the test of objectivity and are often identifiable by their attempts to do an “end run” around the public mechanisms of science by avoiding established peer review mechanisms and the information-sharing mechanisms that make replication possible. Instead, they attempt to promulgate their findings directly to consumers, such as teachers.”</p></blockquote>

<p>This is an important caution to bear in mind. We have an over-abundance of “direct to consumer” products and pitches in our field. Furthermore, even when something <em>is</em> research-based via clinical studies, there is still the issue of translation and implementation in the complex and complicated world of real schools and classrooms.</p>

<p>Either way, in classrooms and schools we also need our own version of peer review: a social process of checks and balances. Through dialogue and shared analysis of student data in teams, and via peer classroom intervisitations and feedback, we can get our assumptions and hypotheses checked.</p>

<h1 id="the-principle-of-converging-evidence">The Principle of Converging Evidence</h1>

<p>Research itself is hardly pristine, either. All individual studies are imperfect in their own way (and each should clearly outline its limitations) — but when taken collectively, they can provide robust conclusions.</p>

<blockquote><p>“Scientists do not evaluate data from a single experiment that has finally been designed in the perfect way. They most often evaluate data from dozens of experiments, each containing some flaws but providing part of the answer.”</p></blockquote>

<p>This idea of converging evidence is critical, because research takes time to conduct, write up, and publish, and it’s easy to get taken up with the latest finding and lose sight of a wider body of evidence, especially findings established by a previous generation. It’s furthermore difficult to connect the dots between different journals and bodies of knowledge. Like any human endeavor, there are geographical, institutional, and social networks and gaps between scientists that make cross-disciplinary connections harder to forge over time. Some of the best peer-reviewed articles provide a comprehensive overview of the extant research, and meta-analysis can also be invaluable for sifting through the bricolage and separating the wheat from the chaff.</p>
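<p>For the curious, here is a minimal sketch of one common way researchers formalize converging evidence: a fixed-effect meta-analysis, which pools effect estimates from several imperfect studies by weighting each according to its precision. The effect sizes and standard errors below are invented for illustration.</p>

```python
# Toy fixed-effect meta-analysis: pool effect-size estimates from several
# imperfect studies, weighting each by its precision (inverse variance).
# All numbers are invented for illustration.
studies = [
    # (effect size d, standard error)
    (0.45, 0.20),
    (0.30, 0.15),
    (0.60, 0.25),
    (0.15, 0.30),
]

weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

# The result is a single pooled estimate with a tighter confidence
# interval than any one study could provide on its own.
print(f"pooled effect: {pooled:.2f} +/- {1.96 * pooled_se:.2f} (95% CI)")
```

No single study here is decisive, but the inverse-variance weighting lets the more precise studies count for more while every flawed study still contributes part of the answer.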

<p>This makes me think relatedly about how important it is to consider a variety of sources of information about our students — including talking directly to them and their families. I used to coordinate and write Individualized Education Plans (IEPs) for students in my building, and it was only after examining a student’s social history, multiple years of academic performance, multiple test scores across domains, considering the student’s behavior and performance across classrooms, examining writing and work samples, interviewing the student, and speaking with all of his or her teachers and service providers and family that I began to feel like I was getting somewhere in understanding their strengths and needs. One data point will tell you very little about a student. But with multiple forms of data, both qualitative and quantitative, we can tell a story from the converging evidence.</p>

<h1 id="more-to-come">More to Come</h1>

<p>Can you believe that we <em>still</em> aren’t done exploring <a href="https://www.thereadingleague.org/wp-content/uploads/2018/10/Using-Research-Reason-Stanovich.pdf"><em>Using Research and Reason in Education: How Teachers Can Use Scientifically Based Research To Make Curricular &amp; Instructional Decisions</em></a> by Paula and Keith Stanovich?!</p>

<p>Nope. There’s that much good stuff in there to unpack.</p>

<p>Before we wrap up, let’s review what we’ve covered thus far since Part I:</p>
<ul><li>The way we think about and perceive the world influences how we think about science.</li>
<li>Some ways of thinking about and perceiving the world lend themselves more readily to science.</li>
<li>Having different lenses for viewing the world can also be beneficial to science and we need to be flexible to allow for the complexity of the world.</li>
<li>Our beliefs are compelled by stories and we prefer to have our beliefs fit together coherently, but we need to also be able to change our beliefs to match the evidence. And scientists need to be able to tell better stories about what the evidence shows.</li>
<li>We are also easily compelled by flashy, unique new findings that can present a magic bullet, rather than by a firm evidence base.</li>
<li>Resolving disputes scientifically means being willing to settle differences via mediated collaboration and peer-reviewed, published findings.</li>
<li>Teachers need to become more familiar with peer-reviewed findings rather than rely on the self-proclaimed expertise of gurus.</li>
<li>Peer review ensures a modicum of checks against pseudoscience and establishes the research as part of the realm of “public” knowledge.</li>
<li>The peer-reviewed evidence base needs to converge in its findings across multiple imperfect studies to become established as robust research-based practice.</li>
<li>Systematic empiricism is the basis for testing hypotheses and establishing what is “research-based practice.”</li>
<li>We need to have the flexibility and intellectual humility to change our beliefs based on the converging evidence.</li></ul>

<p>To be continued in Part IV of our series. Thanks for joining me.</p>

<p><a href="https://languageandliteracy.blog/tag:Stanovich" class="hashtag"><span>#</span><span class="p-category">Stanovich</span></a> <a href="https://languageandliteracy.blog/tag:science" class="hashtag"><span>#</span><span class="p-category">science</span></a> <a href="https://languageandliteracy.blog/tag:scienceofreading" class="hashtag"><span>#</span><span class="p-category">scienceofreading</span></a> <a href="https://languageandliteracy.blog/tag:SOR" class="hashtag"><span>#</span><span class="p-category">SOR</span></a> <a href="https://languageandliteracy.blog/tag:research" class="hashtag"><span>#</span><span class="p-category">research</span></a> <a href="https://languageandliteracy.blog/tag:empiricism" class="hashtag"><span>#</span><span class="p-category">empiricism</span></a></p>

<p><a href="https://remark.as/p/languageandliteracy.blog/how-you-interpret-the-science-of-reading-depends-on-how-you-think-of-mcfr">Discuss...</a></p>
]]></content:encoded>
      <guid>https://languageandliteracy.blog/how-you-interpret-the-science-of-reading-depends-on-how-you-think-of-mcfr</guid>
      <pubDate>Wed, 31 Aug 2022 14:52:19 +0000</pubDate>
    </item>
    <item>
      <title>How you interpret “the science of reading” depends on how you think of “science”: Part II</title>
      <link>https://languageandliteracy.blog/how-you-interpret-the-science-of-reading-depends-on-how-you-think-of-z354?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[The “science of reading” has become a loaded term — partly due to how “science” itself is conceived.&#xA;&#xA;In Part I, we examined a 2003 article by Keith Stanovich that proposed five different “styles” that can influence how science is conducted and perceived. In that article, we learned that in education there may be a tendency to lean towards “coherence” in narratives or the “uniqueness” of silver bullet fads. These tendencies can subvert science-based reading practice.&#xA;&#xA;In Part II, we will look at yet another stellar 2003 piece by Paula and Keith Stanovich titled “Using Research and Reason in Education: How Teachers Can Use Scientifically Based Research To Make Curricular &amp; Instructional Decisions.&#34;&#xA;&#xA;A tip of the hat to the Reading League for this source.&#xA;&#xA;In this dense and lengthy article, the Stanovichs examine how classroom teachers can leverage an understanding of the existing evidence base and of the scientific process at large to inform their practice. It’s so lengthy that I’m going to break it up into more than one post!&#xA;&#xA;There’s a ream of great pull-quotes in this piece, such as the following:&#xA;&#xA;  “As professionals, teachers can become more effective and powerful by developing the skills to recognize scientifically based practice and, when the evidence is not available, use some basic research concepts to draw conclusions on their own.”&#xA;&#xA;I will readily admit that I am not very scientifically literate. The methods section of a research paper makes my eyes glaze over, and you’ve lost me at “dependent variable.” As a result, I find much of what is put forward by the Stanovichs in this paper enlightening. 
They outline what I’m sure is basic stuff for some, but I appreciated these fundamentals, and I also noticed that the authors suggest — here and in their other paper — that scientists themselves can have trouble with these principles.&#xA;&#xA;In the introduction to the paper, the Stanovichs make a strong argument for the need for greater scientific understanding amongst teachers. It is a matter, they claim, of “professional autonomy”: in an environment in which “anything goes,” the result is “a fertile environment for gurus to sell untested educational ‘remedies’ that are not supported by an established research base.”&#xA;&#xA;Yep. The field of education is most definitely a fertile environment for both self-proclaimed and widely venerated “gurus.”&#xA;&#xA;  “The ‘anything goes’ mentality actually represents a threat to teachers’ professional autonomy. It provides a fertile environment for gurus to sell untested educational “remedies” that are not supported by an established research base.”&#xA;&#xA;Education has therefore come to be driven by political disputes and personal agendas rather than scientific results:&#xA;&#xA;  “A vast literature has been generated on best practices that foster children’s reading acquisition…Yet much of this literature remains unknown to many teachers, contributing to the frustrating lack of clarity about accepted, scientifically validated findings and conclusions on reading acquisition.”&#xA;&#xA;  “The field’s failure to ground practice in the attitudes and values of science has made educators susceptible to the ‘authority syndrome’ as well as fads and gimmicks that ignore evidence-based practice.”&#xA;&#xA;The Stanovichs therefore recommend training teachers to apply some basic scientific criteria for evaluating knowledge, which “could easily be included in initial teacher preparation programs.”&#xA;&#xA;These criteria include:&#xA;&#xA;“the publication of findings in refereed journals (scientific publications that employ a process 
of peer review),&#xA;the duplication of the results by other investigators, and&#xA;a consensus within a particular research community on whether there is a critical mass of studies that point toward a particular conclusion”&#xA;&#xA;These may seem disarmingly simple, but the reality is that even the basic gate of evaluating current evidence on reading practice from the standpoint of peer review could do a lot to clear up a few misconceptions out there.&#xA;&#xA;Publicly Verifiable Research Conclusions&#xA;&#xA;We know the peer review process ain’t bulletproof, but it is a minimal guarantee that information will not be pseudoscience.&#xA;&#xA;  “Peer review is a minimal criterion, not a stringent one.”&#xA;&#xA;Furthermore — and here I think the Stanovichs illuminate what is the most powerful aspect of peer review — it removes the status of any “special” knowledge attributed to the aforementioned “gurus.”&#xA;&#xA;  “Research-based conclusions, when published in a peer reviewed journal, become part of the public realm, available to all, in a way that claims of “special expertise” are not.”&#xA;&#xA;What the Stanovichs mean by “public” is not accessibility to the general public so much as that the research has been published as part of the greater effort of scientific consilience to build a body of knowledge that can be tested and verified.&#xA;&#xA;Many educators believe that knowledge resides within particular individuals—with particularly elite insights—who then must be called upon to dispense this knowledge to others. . .&#xA;&#xA;Science, however, with its conception of publicly verifiable knowledge, actually democratizes knowledge. It frees practitioners and researchers from slavish dependence on authority.&#xA;&#xA;Science can democratize knowledge. What a beautiful idea. 
It can free us from the Lucy Calkinses of the ed world and allow any and all to understand it — so long as we make the effort to grasp and verify it.&#xA;&#xA;  “Empirical science, by generating knowledge and moving it into the public domain, is a liberating force. Teachers can consult the research and decide for themselves whether the state of the literature is as the expert portrays it.”&#xA;&#xA;This made me think of the work of journalist Emily Hanford, who has done our field the invaluable service of synthesizing decades of reading science into more readily digestible forms for the general public and for educators. Interestingly, her name has become almost synonymous with “the science of reading,” and, like the term, can raise a few hackles. But don’t read her as an “expert” on the matter. Review the extensive footnotes provided by her pieces–and beyond–and then decide for yourself.&#xA;&#xA;But how do we know, beyond peer review, what makes something “research-based”? What the heck is science, anyway?&#xA;&#xA;We’ll examine the Stanovichs’ answers to these questions in Part III! Stay tuned.&#xA;&#xA;#reading #research #literacy #empiricism #peerreview #science #scienceofreading #SOR #Stanovich&#xA;&#xA;&lt;a href=&#34;https://remark.as/p/languageandliteracy.blog/how-you-interpret-the-science-of-reading-depends-on-how-you-think-of-z354&#34;&gt;Discuss...&lt;/a&gt;]]&gt;</description>
      <content:encoded><![CDATA[<p>The “science of reading” has become a loaded term — partly due to how “science” itself is conceived.</p>

<p><a href="https://languageandliteracy.blog/how-you-interpret-the-science-of-reading-depends-on-how-you-think-of">In Part I</a>, we examined a 2003 article by Keith Stanovich that proposed five different “styles” that can influence how science is conducted and perceived. In that article, we learned that in education there may be a tendency to lean towards “coherence” in narratives or the “uniqueness” of silver bullet fads. These tendencies can subvert science-based reading practice.</p>

<p>In Part II, we will look at yet another stellar 2003 piece by Paula and Keith Stanovich titled “<a href="https://www.thereadingleague.org/wp-content/uploads/2018/10/Using-Research-Reason-Stanovich.pdf"><em>Using Research and Reason in Education: How Teachers Can Use Scientifically Based Research To Make Curricular &amp; Instructional Decisions</em></a>.”

A tip of the hat to <a href="https://www.thereadingleague.org/knowledge-base/">the Reading League</a> for this source.</p>

<p>In this dense and lengthy article, the Stanovichs examine how classroom teachers can leverage an understanding of the existing evidence base and of the scientific process at large to inform their practice. It’s so lengthy that I’m going to break it up into more than one post!</p>

<p>There’s a ream of great pull-quotes in this piece, such as the following:</p>

<blockquote><p>“As professionals, teachers can become more effective and powerful by developing the skills to recognize scientifically based practice and, when the evidence is not available, use some basic research concepts to draw conclusions on their own.”</p></blockquote>

<p>I will readily admit that I am not very scientifically literate. The methods section of a research paper makes my eyes glaze over, and you’ve lost me at “dependent variable.” As a result, I find much of what is put forward by the Stanovichs in this paper enlightening. They outline what I’m sure is basic stuff for some, but I appreciated these fundamentals, and I also noticed that the authors suggest — here and <a href="https://languageandliteracy.blog/how-you-interpret-the-science-of-reading-depends-on-how-you-think-of">in their other paper</a> — that scientists themselves can have trouble with these principles.</p>

<p>In the introduction to the paper, the Stanovichs make a strong argument for the need for greater scientific understanding amongst teachers. It is a matter, they claim, of “professional autonomy”: in an environment in which “anything goes,” the result is “a fertile environment for gurus to sell untested educational ‘remedies’ that are not supported by an established research base.”</p>

<p>Yep. The field of education is most definitely a fertile environment for both self-proclaimed and widely venerated “gurus.”</p>

<blockquote><p>“The ‘anything goes’ mentality actually represents a threat to teachers’ professional autonomy. It provides a fertile environment for gurus to sell untested educational “remedies” that are not supported by an established research base.”</p></blockquote>

<p>The field, they argue, has therefore come to be driven by political disputes and personal agendas rather than by scientific results:</p>

<blockquote><p>“A vast literature has been generated on best practices that foster children’s reading acquisition…Yet much of this literature remains unknown to many teachers, contributing to the frustrating lack of clarity about accepted, scientifically validated findings and conclusions on reading acquisition.”</p>

<p>“The field’s failure to ground practice in the attitudes and values of science has made educators susceptible to the ‘authority syndrome’ as well as fads and gimmicks that ignore evidence-based practice.”</p></blockquote>

<p>The Stanovichs therefore recommend training teachers to apply some basic scientific criteria for evaluating knowledge, which “could easily be included in initial teacher preparation programs.”</p>

<p>These criteria include:</p>
<ol><li>“the publication of findings in refereed journals (scientific publications that employ a process of peer review),</li>
<li>the duplication of the results by other investigators, and</li>
<li>a consensus within a particular research community on whether there is a critical mass of studies that point toward a particular conclusion”</li></ol>

<p>These may seem disarmingly simple, but the reality is that even the basic gate of evaluating current evidence on reading practice from the standpoint of peer review could do a lot to clear up a few misconceptions out there.</p>

<h1 id="publicly-verifiable-research-conclusions">Publicly Verifiable Research Conclusions</h1>

<p>We know the peer review process ain’t bulletproof, but it is a minimal safeguard against pseudoscience.</p>

<blockquote><p>“Peer review is a minimal criterion, not a stringent one.”</p></blockquote>

<p>Furthermore — and here I think the Stanovichs illuminate the most powerful aspect of peer review — it strips away the “special” status of knowledge attributed to the aforementioned “gurus.”</p>

<blockquote><p>“Research-based conclusions, when published in a peer reviewed journal, become part of the public realm, available to all, in a way that claims of “special expertise” are not.”</p></blockquote>

<p>What the Stanovichs mean by “public” is not accessibility to the general public so much as that the research has been published as part of the greater effort of scientific consilience to build a body of knowledge that can be tested and verified.</p>

<blockquote><p>“Many educators believe that knowledge resides within particular individuals—with particularly elite insights—who then must be called upon to dispense this knowledge to others. . .”</p>

<p>“Science, however, with its conception of publicly verifiable knowledge, actually democratizes knowledge. It frees practitioners and researchers from slavish dependence on authority.”</p></blockquote>

<p>Science can democratize knowledge. What a beautiful idea. It can free us from the Lucy Calkinses of the ed world and allow any and all to understand it — so long as we make the effort to grasp and verify it.</p>

<blockquote><p>“Empirical science, by generating knowledge and moving it into the public domain, is a liberating force. Teachers can consult the research and decide for themselves whether the state of the literature is as the expert portrays it.”</p></blockquote>

<p>This made me think of the work of journalist Emily Hanford, who has done our field the invaluable service of synthesizing decades of reading science into more readily digestible forms for the general public and for educators. Interestingly, her name has become almost synonymous with “the science of reading,” and, like the term itself, can raise a few hackles. But you don’t need to take her as an “expert” on the matter. Review <a href="https://www.apmreports.org/episode/2020/08/06/what-the-words-say">the extensive footnotes</a> provided in her pieces, and beyond, and then decide for yourself.</p>

<p>But how do we know, beyond peer review, what makes something “research-based”? What the heck is science, anyway?</p>

<p>We’ll examine the Stanovichs’ answers to these questions in Part III! Stay tuned.</p>

<p><a href="https://languageandliteracy.blog/tag:reading" class="hashtag"><span>#</span><span class="p-category">reading</span></a> <a href="https://languageandliteracy.blog/tag:research" class="hashtag"><span>#</span><span class="p-category">research</span></a> <a href="https://languageandliteracy.blog/tag:literacy" class="hashtag"><span>#</span><span class="p-category">literacy</span></a> <a href="https://languageandliteracy.blog/tag:empiricism" class="hashtag"><span>#</span><span class="p-category">empiricism</span></a> <a href="https://languageandliteracy.blog/tag:peerreview" class="hashtag"><span>#</span><span class="p-category">peerreview</span></a> <a href="https://languageandliteracy.blog/tag:science" class="hashtag"><span>#</span><span class="p-category">science</span></a> <a href="https://languageandliteracy.blog/tag:scienceofreading" class="hashtag"><span>#</span><span class="p-category">scienceofreading</span></a> <a href="https://languageandliteracy.blog/tag:SOR" class="hashtag"><span>#</span><span class="p-category">SOR</span></a> <a href="https://languageandliteracy.blog/tag:Stanovich" class="hashtag"><span>#</span><span class="p-category">Stanovich</span></a></p>

<p><a href="https://remark.as/p/languageandliteracy.blog/how-you-interpret-the-science-of-reading-depends-on-how-you-think-of-z354">Discuss...</a></p>
]]></content:encoded>
      <guid>https://languageandliteracy.blog/how-you-interpret-the-science-of-reading-depends-on-how-you-think-of-z354</guid>
      <pubDate>Tue, 19 Jul 2022 14:39:35 +0000</pubDate>
    </item>
    <item>
      <title>How you interpret “the science of reading” depends on how you think of “science”: Part I</title>
      <link>https://languageandliteracy.blog/how-you-interpret-the-science-of-reading-depends-on-how-you-think-of?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[I’ve observed an interesting divide in how people react to and interpret the term “the science of reading” (or “SOR” for short).&#xA;&#xA;For some, the term elicits eager head nodding — it’s even become incorporated into the sales pitch of many a vendor of education products. For others, the term elicits a gut reaction akin to disgust.&#xA;&#xA;There’s a lot wrapped up in how someone may think of “science” at large that then influences their reactions to the term of the “science of reading.” But don’t just take my word for it. Keith and Paula Stanovich penned some really insightful pieces about this in the early 2000s, and outlined how educators can understand and leverage science to inform their own instructional practice.&#xA;!--more--&#xA;I’m going to tackle two pieces by them in two different blog posts. In Part I here, we will tackle the “Styles of Science,” from a 2003 piece by Keith Stanovich titled “Understanding the Styles of Science in the Study of Reading” in the Scientific Studies of Reading.&#xA;&#xA;A warm hat tip to Chris Schatschneider for this article!&#xA;&#xA;In this article, Stanovich lays out 5 “styles” of science that colors how the overall term is understood.&#xA;&#xA;Correspondence vs Coherence&#xA;Analytic Reductionism vs Holism&#xA;Probabilistic Prediction vs Case-based Approach&#xA;Robust-Process Explanations vs Actual-Sequence Explanations&#xA;Consilience vs Uniqueness&#xA;&#xA;Not the pithiest breakdown in the world, but there’s some great quotes in here, as well as some useful frames for understanding perspectives on science.&#xA;&#xA;Correspondence vs Coherence&#xA;&#xA;This may be the most useful distinction.&#xA;&#xA;A correspondence view is the truly scientific one: it means using objective data to form and test theories, which is the basis for the scientific method. 
A coherence view, on the other hand, is our innate predilection to create and respond to compelling narratives about the world.&#xA;&#xA;When it comes to reading research, consider how initial theories were coherence based — they established a narrative that fit the observational data that reading is primarily a “visual” skill in which whole words are recognized. But counterintuitively, empirical testing that better corresponded to reading has clearly demonstrated that reading is primarily a rapid connection of letter and letter-groups to their sounds.&#xA;&#xA;Ken Goodman’s entire oeuvre was based on a coherence approach — and his narratives continue to retain a strong grip on educators. And arguments about advanced phonemic awareness continue apace as we speak — much of which is currently based more on an overarching theory and synthesis of research rather than on corresponding empirical studies.&#xA;&#xA;On what a “correspondence” perspective entails:&#xA;&#xA;  “Simply, there is a real world out there that exists independently of our beliefs about it, researchers form theories about this world, and the theories that track the world best are closer to the truth and are thus a better basis for action. This is why planes don’t fall out of the sky, why bridges rarely collapse, and why my headache medication works more often than not.”&#xA;&#xA;On what a “coherence” perspective entails:&#xA;&#xA;  “many in the qualitative research communities emphasize constructivist principles that put stress on the requirement that beliefs fit together in a reasonably logical way—the so-called coherence theories of truth.”&#xA;&#xA;  . . . “numerous authors have written about how the coherence doctrine, by linking itself with ecumenical notions such as tolerance and personal validation, obscures its uglier aspects. What has been obscured is how indiscriminate belief validation, with no check in external reality, creates a world that most of us would consider a nightmare. 
In this world, the witnesses and evidence in a jury trial are not sifted as to credibility because any piece of evidence put forward is equal to any other for the reason that all are valid by someone’s perspective.”&#xA;&#xA;And on the resulting clash between these two views:&#xA;&#xA;  “. . . an extreme adherence to a correspondence theory of truth often necessitates the frustration of the strong human need for narrative coherence in explanation.”&#xA;&#xA;  “the explanatory frameworks that are generated by scientists working in the correspondence framework may not seem plausible to those who value coherence more, particularly the type of coherence that resonates with the narratives inherent in folk psychology.”&#xA;&#xA;Analytic Reductionism vs Holism&#xA;&#xA;While Stanovich makes it clear that correspondence more clearly aligns with science than coherence, some of these other styles outlined rely on a healthy balance, such as here in analytic reductionism vs holism. Both views have their downfalls, and both are needed as a complement to the other.&#xA;&#xA;Analytic reductionism means breaking things down into small parts that can be studied individually — this is the bread and butter of empirical research. Holism, on the other hand, means viewing small parts within their greater context and whole. I’ve spent a fair amount of time arguing the need to view individual schools from a more holistic view, and I currently spent much of my time (in my day job) arguing about the need to view individual student data from a more holistic view, as well. But as Stanovich argues here, the translation of reading science to the classroom benefits from both approaches applied judiciously:&#xA;&#xA;  “Gone are the days when such investigations were couched as if comparing a disembodied mind interacting with a disembodied orthography. 
Investigators in this area appreciate the necessity of adding the learning environment and instructional context as interacting factors in the model of orthography and achievement links that is being developed. This area displays an additive holism rather than the subtractive holism that has soured so many scientists on that end of the analytic and holistic continuum.”&#xA;&#xA;Probabilistic Prediction vs Case-based Approach&#xA;&#xA;I found this distinction really useful as well, and very much related to the correspondence vs coherence dynamic. Researchers who conduct empirical studies will be more accustomed to probabilistic thinking, but for most of us laypeople, we are compelled by case studies.&#xA;&#xA;  “We in the behavioral science of the study of reading are so accustomed to probabilistic explanation and prediction that we are prone to forget how alien it seems to the layperson, to teachers, and indeed to some working in our own field.”&#xA;&#xA;Stanovich makes a useful distinction in terms of understanding reading research, which is to understand that probabilistic prediction applies to aggregate data and decision-making, and that outlier cases will always be found — which can lead to some educators writing off the implications of aggregate data.&#xA;&#xA;That said, both approaches have their utility and need to be wielded strategically:&#xA;&#xA;  “In many cases though, in the complicated field of reading—in its domains of application—it really is unclear whether we should be in probabilistic or&#xA;case-based mode”&#xA;&#xA;Robust-Process Explanations vs Actual-Sequence Explanations&#xA;&#xA;This is another one in which both styles have an important function in translating reading science into practice. 
Robust-process explanations are based on theoretical principles which can apply across different examples, whereas actual-sequence explanations apply to specific cases that have happened.&#xA;&#xA;  “Those of us studying the psychology of the reading process are often after robust-process explanations, whereas we often address audiences who are interested in and oriented toward actual-sequence explanations. Teachers often want to know how this particular child reached this level of school achievement or this level of reading difficulty, as the case may be.”&#xA;&#xA;Both of these are useful ways of getting at the truth.&#xA;&#xA;  “By subdividing robust-process explanations we get closer to actual-sequence explanations, and by aggregating actual-sequence explanations we get closer to a robust-process explanation. The two work in concert.”&#xA;&#xA;Consilience vs Uniqueness&#xA;&#xA;E.O. Wilson defines consilience as the “unification of knowledge by the linking of facts and fact-based theory across disciplines to create a common groundwork of explanation.” Such efforts are important in reading research, as robust theories draw upon different fields such as “connectionist modeling, cognitive neuroscience, and classroom studies of effective practice.”&#xA;&#xA;Uniqueness, on the other hand, refers to a tendency to look for the flashy new thing that stands out and excites people.&#xA;&#xA;  “This concern for consilience contrasts with the faddish tendencies in the field of education to search for magic bullets and miracle cures deriving from theories that do not cohere with the knowledge being developed by allied disciplines. 
The quest for a magic bullet always tempts education to stray from valuing consilience.”&#xA;&#xA;Like correspondence vs coherence, Stanovich positions consilience as far more firmly on the side of science.&#xA;&#xA;What styles are scientific?&#xA;&#xA;scientific styles&#xA;&#xA;Somewhat unscientifically, since none of these “styles” are based on quantitative research, Stanovich guesstimates how these different styles can be applied in varying ways and still be considered scientific. As you can see, the two styles that present that greatest challenge, in his view, to the application of reading science are the tendency of many in the education field to lean into coherence and uniqueness in their views of reading.&#xA;&#xA;That said, Stanovich is not trying to enforce a rigid perspective of science here — he believes these other styles can work together, and he also warns against rigidity — which is a warning I think many who would consider themselves “Science of Reading” folks would do well to heed:&#xA;&#xA;  “I do fear that some would prefer rigid rules and bullet points on a PowerPoint presentation rather than the story of science in its full complexity—including the complexities of certain styles that are neither right nor wrong but represent continua (better viewed as parameters that we are constantly adjusting so as to facilitate the process). Science is a delicate epistemological game.”&#xA;&#xA;  “Science’s real uniqueness comes from its self-correcting nature. Its unique epistemic power comes from a very un-Promethean characteristic: its constant fiddling with things—with theory, experimental setups, techniques, and its styles of the type I have discussed. 
And we are not afraid to readjust—which itself, recursively, is one of science’s characteristic features—we are not afraid to implicitly admit previous error when we make a readjustment.”&#xA;&#xA;  “I fear that this flexibility, this juggling, this self-corrective mindset will be lost if we too rigidly reduce science to a set of rules. Do not misunderstand me, though—I am wholeheartedly in favor of instructing teachers and other educational personnel in what science is and is not and what are the unique features that underlie its epistemic power. But these features should not become a prison.&#34;&#xA;&#xA;There’s been a lot of recent “scholarly disputes” about reading research, and Stanovich raises a procedure that would be nice to see used more: “adversarial collaboration.” In this process, the disputants agree to an arbiter, who then works with the disputants to design experiments that can test the theories under dispute. This research is then published by the arbiter with the disputants as co-authors.&#xA;&#xA;What a brilliant way to tackle some of the current debates about phonemic awareness!&#xA;&#xA;  “In the middle of the heat of the internecine warfare that sometimes takes place in our field, it is important to be able to differentiate legitimate scientific criticism that derives from background assumptions from the opposite ends of the style dimensions I have discussed here—and criticisms deriving from critics who are not playing the game of science at all.”&#xA;&#xA;  “Willingness to accept offers of adversarial collaboration might be a tool to use in distinguishing who is playing science from who is not.&#xA;&#xA;This approach of adversarial collaboration has been used recently to advance the research into cognition. 
Why don&#39;t we do more of this in literacy research?&#xA;&#xA;#Stanovich #research #empiricism #literacy #reading #science #scienceofreading #SOR&#xA;&#xA;a href=&#34;https://remark.as/p/languageandliteracy.blog/how-you-interpret-the-science-of-reading-depends-on-how-you-think-of&#34;Discuss.../a]]&gt;</description>
      <content:encoded><![CDATA[<p>I’ve observed an interesting divide in how people react to and interpret the term “<em>the science of reading</em>” (or “SOR” for short).</p>

<p>For some, the term elicits eager head nodding — it’s even become incorporated into the sales pitch of many a vendor of education products. For others, the term elicits a gut reaction akin to disgust.</p>

<p>There’s a lot wrapped up in how someone may think of “science” at large that then influences their reactions to the term “science of reading.” But don’t just take my word for it. Keith and Paula Stanovich penned some really insightful pieces about this in the early 2000s, outlining how educators can understand and leverage science to inform their own instructional practice.</p>

<p>I’m going to tackle two pieces by them in two different blog posts. In Part I here, we will tackle the “Styles of Science,” from a 2003 piece by Keith Stanovich titled “Understanding the Styles of Science in the Study of Reading” in <em>Scientific Studies of Reading</em>.</p>

<p>A warm hat tip to <a href="https://x.com/schotz/status/1528425904778760193?s=20">Chris Schatschneider</a> for this article!</p>

<p>In this article, Stanovich lays out five “styles” of science that color how the overall term is understood:</p>
<ol><li>Correspondence vs Coherence</li>
<li>Analytic Reductionism vs Holism</li>
<li>Probabilistic Prediction vs Case-based Approach</li>
<li>Robust-Process Explanations vs Actual-Sequence Explanations</li>
<li>Consilience vs Uniqueness</li></ol>

<p>Not the pithiest breakdown in the world, but there’s some great quotes in here, as well as some useful frames for understanding perspectives on science.</p>

<h1 id="correspondence-vs-coherence">Correspondence vs Coherence</h1>

<p>This may be the most useful distinction.</p>

<p>A correspondence view is the truly scientific one: it means using objective data to form and test theories, which is the basis for the scientific method. A coherence view, on the other hand, is our innate predilection to create and respond to <a href="https://languageandliteracy.blog/the-danger-of-the-story-of-white-displacement">compelling narratives</a> about the world.</p>

<p>When it comes to reading research, consider how initial theories were coherence based — they established a narrative that fit the observational data: that reading is primarily a “visual” skill in which whole words are recognized. But counterintuitively, empirical testing that better corresponded to the actual process of reading has clearly demonstrated that reading is primarily a rapid connection of letters and letter-groups to their sounds.</p>

<p>Ken Goodman’s entire oeuvre was based on a coherence approach — and <a href="https://languageandliteracy.blog/learning-to-read-is-natural-so-claim-the-goodmans">his narratives</a> continue to retain a strong grip on educators. And arguments about <a href="https://languageandliteracy.blog/the-sound-and-the-fury-of-phonemes-and-reading">advanced phonemic awareness</a> continue apace as we speak — much of which is currently based more on an overarching theory and synthesis of research rather than on corresponding empirical studies.</p>

<p>On what a “correspondence” perspective entails:</p>

<blockquote><p>“Simply, there is a real world out there that exists independently of our beliefs about it, researchers form theories about this world, and the theories that track the world best are closer to the truth and are thus a better basis for action. This is why planes don’t fall out of the sky, why bridges rarely collapse, and why my headache medication works more often than not.”</p></blockquote>

<p>On what a “coherence” perspective entails:</p>

<blockquote><p>“many in the qualitative research communities emphasize constructivist principles that put stress on the requirement that beliefs fit together in a reasonably logical way—the so-called coherence theories of truth.”</p>

<p>. . . “numerous authors have written about how the coherence doctrine, by linking itself with ecumenical notions such as tolerance and personal validation, obscures its uglier aspects. What has been obscured is how indiscriminate belief validation, with no check in external reality, creates a world that most of us would consider a nightmare. In this world, the witnesses and evidence in a jury trial are not sifted as to credibility because any piece of evidence put forward is equal to any other for the reason that all are valid by someone’s perspective.”</p></blockquote>

<p>And on the resulting clash between these two views:</p>

<blockquote><p>“. . . an extreme adherence to a correspondence theory of truth often necessitates the frustration of the strong human need for narrative coherence in explanation.”</p>

<p>“the explanatory frameworks that are generated by scientists working in the correspondence framework may not seem plausible to those who value coherence more, particularly the type of coherence that resonates with the narratives inherent in folk psychology.”</p></blockquote>

<h1 id="analytic-reductionism-vs-holism">Analytic Reductionism vs Holism</h1>

<p>While Stanovich makes it clear that correspondence aligns with science more closely than coherence does, some of the other styles rely on a healthy balance, as here with analytic reductionism vs holism. Both views have their drawbacks, and each is needed as a complement to the other.</p>

<p>Analytic reductionism means breaking things down into small parts that can be studied individually — this is the bread and butter of empirical research. Holism, on the other hand, means viewing small parts within their greater context and whole. I’ve spent <a href="https://schoolecosystem.wordpress.com/">a fair amount of time</a> arguing for the need to view individual schools more holistically, and I currently spend much of my time (in my day job) arguing for the need to view individual student data more holistically as well. But as Stanovich argues here, the translation of reading science to the classroom benefits from both approaches applied judiciously:</p>

<blockquote><p>“Gone are the days when such investigations were couched as if comparing a disembodied mind interacting with a disembodied orthography. Investigators in this area appreciate the necessity of adding the learning environment and instructional context as interacting factors in the model of orthography and achievement links that is being developed. This area displays an additive holism rather than the subtractive holism that has soured so many scientists on that end of the analytic and holistic continuum.”</p></blockquote>

<h1 id="probabilistic-prediction-vs-case-based-approach">Probabilistic Prediction vs Case-based Approach</h1>

<p>I found this distinction really useful as well, and very much related to the correspondence vs coherence dynamic. Researchers who conduct empirical studies are accustomed to probabilistic thinking, but most of us laypeople are compelled by case studies.</p>

<blockquote><p>“We in the behavioral science of the study of reading are so accustomed to probabilistic explanation and prediction that we are prone to forget how alien it seems to the layperson, to teachers, and indeed to some working in our own field.”</p></blockquote>

<p>Stanovich makes a useful point for understanding reading research: probabilistic prediction applies to aggregate data and decision-making, and outlier cases will always be found — which can lead some educators to write off the implications of aggregate data.</p>

<p>That said, both approaches have their utility and need to be wielded strategically:</p>

<blockquote><p>“In many cases though, in the complicated field of reading—in its domains of application—it really is unclear whether we should be in probabilistic or case-based mode”</p></blockquote>

<h1 id="robust-process-explanations-vs-actual-sequence-explanations">Robust-Process Explanations vs Actual-Sequence Explanations</h1>

<p>This is another one in which both styles have an important function in translating reading science into practice. Robust-process explanations are based on theoretical principles which can apply across different examples, whereas actual-sequence explanations apply to specific cases that have happened.</p>

<blockquote><p>“Those of us studying the psychology of the reading process are often after robust-process explanations, whereas we often address audiences who are interested in and oriented toward actual-sequence explanations. Teachers often want to know how this particular child reached this level of school achievement or this level of reading difficulty, as the case may be.”</p></blockquote>

<p>Both of these are useful ways of getting at the truth.</p>

<blockquote><p>“By subdividing robust-process explanations we get closer to actual-sequence explanations, and by aggregating actual-sequence explanations we get closer to a robust-process explanation. The two work in concert.”</p></blockquote>

<h1 id="consilience-vs-uniqueness">Consilience vs Uniqueness</h1>

<p>E.O. Wilson defines consilience as the “unification of knowledge by the linking of facts and fact-based theory across disciplines to create a common groundwork of explanation.” Such efforts are important in reading research, as robust theories draw upon different fields such as “connectionist modeling, cognitive neuroscience, and classroom studies of effective practice.”</p>

<p>Uniqueness, on the other hand, refers to a tendency to look for the flashy new thing that stands out and excites people.</p>

<blockquote><p>“This concern for consilience contrasts with the faddish tendencies in the field of education to search for magic bullets and miracle cures deriving from theories that do not cohere with the knowledge being developed by allied disciplines. The quest for a magic bullet always tempts education to stray from valuing consilience.”</p></blockquote>

<p>Like correspondence vs coherence, Stanovich positions consilience as far more firmly on the side of science.</p>

<h1 id="what-styles-are-scientific">What styles are scientific?</h1>

<p><img src="https://i.snap.as/Kswjc7Vm.png" alt="scientific styles"/></p>

<p>Somewhat unscientifically, since none of these “styles” are based on quantitative research, Stanovich estimates how the different styles can be applied in varying ways and still be considered scientific. As you can see, the two styles that present the greatest challenge to the application of reading science, in his view, are coherence and uniqueness, which many in the education field tend to lean into in their views of reading.</p>

<p>That said, Stanovich is not trying to enforce a rigid perspective of science here — he believes these other styles can work together, and he also warns against rigidity — which is a warning I think many who would consider themselves “Science of Reading” folks would do well to heed:</p>

<blockquote><p>“<strong>I do fear that some would prefer rigid rules and bullet points on a PowerPoint presentation rather than the story of science in its full complexity</strong>—including the complexities of certain styles that are neither right nor wrong but represent continua (better viewed as parameters that we are constantly adjusting so as to facilitate the process). Science is a delicate epistemological game.”</p>

<p>“Science’s real uniqueness comes from its self-correcting nature. Its unique epistemic power comes from a very un-Promethean characteristic: its constant fiddling with things—with theory, experimental setups, techniques, and its styles of the type I have discussed. And we are not afraid to readjust—which itself, recursively, is one of science’s characteristic features—we are not afraid to implicitly admit previous error when we make a readjustment.”</p>

<p>“<strong>I fear that this flexibility, this juggling, this self-corrective mindset will be lost if we too rigidly reduce science to a set of rules</strong>. Do not misunderstand me, though—I am wholeheartedly in favor of instructing teachers and other educational personnel in what science is and is not and what are the unique features that underlie its epistemic power. <strong>But these features should not become a prison</strong>.”</p></blockquote>

<p>There’s been a lot of recent “scholarly disputes” about reading research, and Stanovich raises a procedure that would be nice to see used more: “adversarial collaboration.” In this process, the disputants agree to an arbiter, who then works with the disputants to design experiments that can test the theories under dispute. This research is then published by the arbiter with the disputants as co-authors.</p>

<p>What a brilliant way to tackle some of the current debates about phonemic awareness!</p>

<blockquote><p>“In the middle of the heat of the internecine warfare that sometimes takes place in our field, it is important to be able to differentiate legitimate scientific criticism that derives from background assumptions from the opposite ends of the style dimensions I have discussed here—and criticisms deriving from critics who are not playing the game of science at all.”</p>

<p>“Willingness to accept offers of adversarial collaboration might be a tool to use in distinguishing who is playing science from who is not.”</p></blockquote>

<p>This approach of adversarial collaboration has been <a href="https://www.biorxiv.org/content/10.1101/2023.06.23.546249v1">used recently</a> to advance research on cognition. Why don&#39;t we do more of this in literacy research?</p>

<p><a href="https://languageandliteracy.blog/tag:Stanovich" class="hashtag"><span>#</span><span class="p-category">Stanovich</span></a> <a href="https://languageandliteracy.blog/tag:research" class="hashtag"><span>#</span><span class="p-category">research</span></a> <a href="https://languageandliteracy.blog/tag:empiricism" class="hashtag"><span>#</span><span class="p-category">empiricism</span></a> <a href="https://languageandliteracy.blog/tag:literacy" class="hashtag"><span>#</span><span class="p-category">literacy</span></a> <a href="https://languageandliteracy.blog/tag:reading" class="hashtag"><span>#</span><span class="p-category">reading</span></a> <a href="https://languageandliteracy.blog/tag:science" class="hashtag"><span>#</span><span class="p-category">science</span></a> <a href="https://languageandliteracy.blog/tag:scienceofreading" class="hashtag"><span>#</span><span class="p-category">scienceofreading</span></a> <a href="https://languageandliteracy.blog/tag:SOR" class="hashtag"><span>#</span><span class="p-category">SOR</span></a></p>

<p><a href="https://remark.as/p/languageandliteracy.blog/how-you-interpret-the-science-of-reading-depends-on-how-you-think-of">Discuss...</a></p>
]]></content:encoded>
      <guid>https://languageandliteracy.blog/how-you-interpret-the-science-of-reading-depends-on-how-you-think-of</guid>
      <pubDate>Tue, 12 Jul 2022 12:20:27 +0000</pubDate>
    </item>
  </channel>
</rss>