<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>speech &#8212; Language &amp; Literacy</title>
    <link>https://languageandliteracy.blog/tag:speech</link>
    <description>Musings about language and literacy and learning</description>
    <pubDate>Thu, 16 Apr 2026 22:59:27 +0000</pubDate>
    <image>
      <url>https://i.snap.as/LIFR67Bi.png</url>
      <title>speech &#8212; Language &amp; Literacy</title>
      <link>https://languageandliteracy.blog/tag:speech</link>
    </image>
    <item>
      <title>The Relation of Speech to Reading and Writing</title>
      <link>https://languageandliteracy.blog/the-relation-of-speech-to-reading-and-writing?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[OK, we’re here, at our third paper in our series examining the naturalness, or not, of gaining literacy.&#xA;&#xA;Liberman, A. M. (1992). Chapter 9 The Relation of Speech to Reading and Writing. In R. Frost &amp; L. Katz (Eds.), Advances in Psychology (Vol. 94, pp. 167–178). North-Holland. https://doi.org/10.1016/S0166-4115(08)62794-6&#xA;Liberman comes out of the gate strong with seven claims about why speech is “more natural” than written language:&#xA;&#xA;Speech is universal. Many languages don’t even have a written form.&#xA;Speech has been around far longer than written language.&#xA;For each of us individually, speech develops far earlier than reading and writing (if we are fortunate to even develop reading and writing).&#xA;Speech does not need to be taught; it is pre-cognitive, like seeing and hearing. Literacy is rather an intellectual achievement.&#xA;    I paused on the first part of this claim. For students with a developmental language disorder, language does need to be taught more intentionally and supported more intensively. And for the type of language that is not just everyday social language—disciplinary, academic written language—such language also needs to be taught explicitly, most especially for multilingual learners, and its acquisition certainly represents an intellectual achievement!&#xA;Parts of our brain have evolved to be utilized specifically for language, while reading and writing must both exploit those innate aspects and repurpose other (originally) nonlinguistic parts. This is the “bootstrapping” notion many more current reading researchers speak to based on brain scans (Wolf, Dehaene, etc.).&#xA;This one is kinda hard to summarize, but it centers on the idea that writing systems are both constrained by the oral language they are based on, and more variable. 
Scripts cannot be purely sound-based, as speech is — instead, they are “pitched at the more abstract phonological and morphophonological levels” and this greater abstraction requires greater conscious awareness, at least initially, on the part of the learner.&#xA;“Speech is the product of biological evolution, while writing systems are artifacts” — “part discovery, part invention.” Here, Liberman echoes an important point also made by Gough and Hillinger:&#xA;&#xA;  “The discovery—surely one of the most momentous of all time—was that words do not differ from one another holistically, but rather by the particular arrangement of a small inventory of the meaningless units they comprise. The invention was simply the notion that if each of these units were to be represented by a distinctive optical shape, then everyone could read and write, provided he knew the language and was conscious of the internal phonological structure of its words.” [bold added]&#xA;&#xA;Here’s the similar quote from G&amp;H:&#xA;&#xA;  “Whether recognition of individual letters causes difficulty or not, the recognition that each ciphertext word is composed of a sequence of meaningless elements must be hard for the child to achieve. The requirement that he note the same fact about the plaintext, that he recognize that each spoken word is composed of a sequence of meaningless elements, may be even more unnatural.” [bold added]&#xA;&#xA;  Gough, P. B., &amp; Hillinger, M. L. (1980). Learning to read: An unnatural act. Bulletin of the Orton Society, 30, 179–196. https://doi.org/10.1007/BF02653717&#xA;&#xA;This point, made both by G&amp;H and Liberman, is worth pausing on and amplifying in more depth, because it’s not only a key point of departure from the argument of the Goodmans, but furthermore a key point that underlies debates about phonics even today. 
For the Goodmans, as with many phonics critics since, reading instruction should be facilitated by learning focused on meaning. According to the Goodmans, when a teacher explicitly and sequentially teaches the meaningless, artificial components of phonemes and graphemes, they create a barrier to natural learning:&#xA;&#xA;  “With the focus on learning, the teacher must understand and deal with language and language learning. . . . The learners keep their minds on meaning. . . The crucial relationships of language with meaning and with the context that makes language meaningful is also vital. . . .We must focus more and more attention on how written language is used in society because it is through the relevant use of language that children will learn it. They will learn it because it will have meaning and purpose to them.&#xA;&#xA;  With the focus on teaching both teachers and learners are dealing with language often in abstract bits and pieces. . . . it’s a serious mistake to create curricula based on artificial skill sequences and hierarchies derived from such studies.&#xA;&#xA;  Our research has convinced us that the skills displayed by the proficient reader derive from the meaningful use of written language and that sequential instruction in those skills is as pointless and fruitless as instruction in the skills of a proficient listener would be to teach infants to comprehend speech.”&#xA;&#xA;  Goodman, K. S., &amp; Goodman, Y. M. (1976). Learning to Read is Natural. https://eric.ed.gov/?id=ED155621&#xA;&#xA;For the Goodmans and many proponents of balanced literacy today, a focus on meaningless, unnatural components is an impediment to the naturally motivated learning of children. 
And hey, they aren’t wrong — learning these abstract aspects of oral and written language is a barrier to all too many.&#xA;&#xA;But we should be absolutely clear that Gough, Hillinger, Liberman, and many researchers focused on literacy fully acknowledge that these components are difficult and a tremendous potential barrier to learning—in fact, they fully agree with the Goodmans that learning sublexical units (phonemes and their haphazard letter sequences) is unnatural! The key difference is that they also argue that these artificial units are essential to reading and therefore must be tackled head on and overcome by children in order for reading to truly be successful.&#xA;&#xA;But I’m taking us away from Liberman, and he’s only getting started. He takes some time to outline what he calls “the conventional view of speech.” According to Liberman, this is a view that assumes that speech is governed by general motor and perceptual systems, rather than ones specialized for language. This means that the processing of speech must be cognitive in nature, as it requires translation: similar to learning the written form, it requires attaching a phonetic label to the sounds of what is heard. In this sense, then, learning language can be perceived as biologically secondary.&#xA;&#xA;The reason Liberman takes time laying this out is because if we are to take this view seriously, it means we must see written language as “equally natural” to speech because it is essentially a similar process of coding that requires cognition, with the only difference being one of mode.&#xA;&#xA;This is pretty much exactly what the Goodmans argued: “. . . if written language can perform the functions of language it must be language.”&#xA;&#xA;The other big issue with the “conventional” view, according to Liberman, is that it means “the elements of a writing system can only be defined as optical shapes. . . 
[and] makes it hard to avoid the assumption that the trouble with the dyslexic must be in the visual system.” This mistaken assumption is indeed a continuing confusion for many about learning to read, as witnessed by some who attempt to teach kids to read by noticing the shapes of words (a quick aside for some nuance: some with dyslexia may have visual-spatial issues, which may become more apparent when learning non-alphabetic written languages, such as Chinese).&#xA;&#xA;word shapes&#xA;&#xA;Here Liberman makes a key distinction: the evolution of oral language is biological, while written language is cultural. I find myself both deeply compelled by this claim, as it is useful, and also a little resistant. I resist because language is also clearly cultural. But I get that the point here is that the mechanism for learning language is baked into our brains, developing rapidly even as we are in the womb, while acquiring literacy is more dependent on cultural transmission and a significant amount of work.&#xA;&#xA;  “In the development of writing systems, the answer is simple and beyond dispute: parity was established by agreement. Thus, all who use an alphabet are parties to a compact that prescribes just which optical shapes are to be taken as symbols for which phonological units, the association of the one with the other having been determined arbitrarily. Indeed, this is what it means to say that writing systems are artifacts, and that the child’s learning the linguistic significance of the characters of the script is a cognitive activity.” [bold added]&#xA;&#xA;This leads Liberman to propose what he calls the “unconventional view of speech.” I’m going to do some heavy paraphrasing here, but if you’re into speech pathology or like to geek out about the articulatory dimensions of speech, you may find this section of the paper interesting, as he lays out why “co-articulation” is a fundamental aspect of speech. 
Essentially, he lays out some principles that allow for the claim that “There is no need . . . for a cognitive translation from an initial auditory representation, simply because there is no initial auditory representation,” meaning that speech is processed rapidly and naturally.&#xA;&#xA;And now Liberman turns to the Goodmans directly to take their full argument head on, so it’s worth reproducing this section in full:&#xA;&#xA;HOW CAN READING/WRITING BE MADE TO EXPLOIT THE MORE NATURAL PROCESSES OF SPEECH?&#xA;&#xA;  “The conventional view of speech provides no basis for asking this question, since there exists, on this view, no difference in naturalness. It is perhaps for this reason that the (probably) most widely held theory of reading in the United States explicitly takes as its premise that reading and writing are, or at least can be, as natural and easy as speech (Goodman &amp; Goodman, 1979). According to this theory, called ‘whole language,’ reading and writing prove to be difficult only because teachers burden children with what the theorists call ‘bite-size abstract chunks of language such as words, syllables, and phonemes’ (Goodman, 1986). If teachers were to teach children to read and write the way they were (presumably) taught to speak, then there would be no problem.&#xA;&#xA;But if we adopt the “unconventional view” of speech, then we don’t view spoken and written language, one auditory and the other visual, as equivalents. 
Instead, this view allows us to see that speech is processed completely differently, and much more swiftly, and we don’t need to become aware of or think about the subunits of sound within a word: “there is nothing in the ordinary use of language that requires the speaker/listener to put his attention on them.”&#xA;&#xA;  “The consequence is that experience with speech is normally not sufficient to make one consciously aware of the phonological structure of its words, yet it is exactly this awareness that is required of all who would enjoy the advantages of an alphabetic scheme for reading and writing.”&#xA;&#xA;And the specialized properties of speech, such as co-articulation, which allow us to wield and process it so efficiently, actually present us with a greater barrier when converting to written language. Co-articulation, the merging of sounds in the speech stream, “has the disadvantage from the would-be reader/writer’s point of view that it destroys any simple correspondence between the acoustic segments and phonological segments they convey.”&#xA;&#xA;Thus, learning to read and write requires cognitive work, at least initially, that is not required for spoken language (Note that though I’ve taken this paper at face value in its use of the word speech, focusing on aspects specific to spoken language, many of these characteristics apply just as readily to sign language).&#xA;&#xA;Whew! This paper was a bit harder to unpack than the others, but I think it’s a very good capstone to our investigation in the series. 
So are we convinced that learning to read, at least initially, is unnatural?&#xA;&#xA;I’ll pursue some final thoughts to wrap up some loose ends in the next post.&#xA;&#xA;#natural #unnatural #reading #spokenlanguage #writtenlanguage #language #literacy #speech #meaning #Liberman&#xA;&#xA;&lt;a href=&#34;https://remark.as/p/languageandliteracy.blog/the-relation-of-speech-to-reading-and-writing&#34;&gt;Discuss...&lt;/a&gt;]]&gt;</description>
      <content:encoded><![CDATA[<p>OK, we’re here, at our third paper <a href="https://write.as/manderson/what-is-un-natural-about-learning-to-read-and-write">in our series</a> examining the naturalness, or not, of gaining literacy.</p>
<ul><li>Liberman, A. M. (1992). Chapter 9 The Relation of Speech to Reading and Writing. In R. Frost &amp; L. Katz (Eds.), Advances in Psychology (Vol. 94, pp. 167–178). North-Holland. <a href="https://doi.org/10.1016/S0166-4115(08)62794-6">https://doi.org/10.1016/S0166-4115(08)62794-6</a>
Liberman comes out of the gate strong with seven claims about why speech is “more natural” than written language:</li></ul>


<ol><li>Speech is universal. Many languages don’t even have a written form.</li>
<li>Speech has been around <a href="https://schoolecosystem.wordpress.com/2019/02/09/close-reading-the-context-of-an-exegesis/">far longer</a> than written language.</li>
<li>For each of us individually, speech develops far earlier than reading and writing (if we are fortunate to even develop reading and writing).</li>
<li>Speech does not need to be taught; it is <em>pre-cognitive</em>, like seeing and hearing. Literacy is rather an <em>intellectual</em> achievement.
<ul><li>I paused on the first part of this claim. For students with a <a href="https://kids.frontiersin.org/articles/10.3389/frym.2019.00094">developmental language disorder</a>, language does need to be taught more intentionally and supported more intensively. And for the type of language that is not just everyday social language—disciplinary, <a href="https://languageforlearning.gse.harvard.edu/core-academic-language">academic written language</a>—such language also needs to be taught explicitly, most especially for multilingual learners, and its acquisition certainly represents an intellectual achievement!</li></ul></li>
<li>Parts of our brain have evolved to be utilized specifically for language, while reading and writing must both exploit those innate aspects and repurpose other (originally) nonlinguistic parts. This is the “bootstrapping” notion many more current reading researchers speak to based on brain scans (Wolf, Dehaene, etc.).</li>
<li>This one is kinda hard to summarize, but it centers on the idea that writing systems are both constrained by the oral language they are based on, and more variable. Scripts cannot be purely sound-based, as speech is — instead, they are “pitched at the more abstract phonological and morphophonological levels” and this greater abstraction requires greater conscious awareness, at least initially, on the part of the learner.</li>
<li>“Speech is the product of biological evolution, while writing systems are artifacts” — “part discovery, part invention.” Here, Liberman echoes an important point also made by Gough and Hillinger:</li></ol>

<blockquote><p>“The discovery—surely one of the most momentous of all time—was that words do not differ from one another holistically, but rather by the particular arrangement of a small inventory of the <strong>meaningless units</strong> they comprise. The invention was simply the notion that if each of these units were to be represented by a distinctive optical shape, then everyone could read and write, provided he knew the language and was <strong>conscious of the internal phonological structure of its words</strong>.” [bold added]</p></blockquote>

<p>Here’s the similar quote from G&amp;H:</p>

<blockquote><p>“Whether recognition of individual letters causes difficulty or not, the recognition that each ciphertext word is composed of a <strong>sequence of meaningless elements</strong> must be hard for the child to achieve. The requirement that he note the same fact about the plaintext, that he recognize that <strong>each spoken word is composed of a sequence of meaningless elements</strong>, may be even more unnatural.” [bold added]</p>

<p>Gough, P. B., &amp; Hillinger, M. L. (1980). Learning to read: An unnatural act. Bulletin of the Orton Society, 30, 179–196. <a href="https://doi.org/10.1007/BF02653717">https://doi.org/10.1007/BF02653717</a></p></blockquote>

<p>This point, made both by G&amp;H and Liberman, is worth pausing on and amplifying in more depth, because it’s not only a key point of departure from the argument of the Goodmans, but furthermore a key point that underlies <a href="https://theconversation.com/phonics-teaching-in-england-needs-to-change-our-new-research-points-to-a-better-approach-172655">debates about phonics</a> even today. For the Goodmans, as with many phonics critics since, reading instruction should be facilitated by learning focused on <em>meaning</em>. According to the Goodmans, when a teacher explicitly and sequentially teaches the <em>meaningless</em>, <em>artificial</em> components of phonemes and graphemes, they create a barrier to natural learning:</p>

<blockquote><p>“With the focus on learning, the teacher must understand and deal with language and language learning. . . . The learners keep their minds on <strong>meaning</strong>. . . The crucial relationships of language with <strong>meaning</strong> and with the context that makes language <strong>meaningful</strong> is also vital. . . .We must focus more and more attention on how written language is used in society because it is through the <strong>relevant</strong> use of language that children will learn it. They will learn it because it will have <strong>meaning</strong> and <strong>purpose</strong> to them.</p>

<p>With the focus on teaching both teachers and learners are dealing with language often in <strong>abstract bits and pieces</strong>. . . . it’s a serious mistake to create curricula based on <strong>artificial skill sequences and hierarchies</strong> derived from such studies.</p>

<p>Our research has convinced us that the skills displayed by the proficient reader derive from the <strong>meaningful</strong> use of written language and that <strong>sequential instruction in those skills is as pointless and fruitless</strong> as instruction in the skills of a proficient listener would be to teach infants to comprehend speech.”</p>

<p>Goodman, K. S., &amp; Goodman, Y. M. (1976). Learning to Read is Natural. <a href="https://eric.ed.gov/?id=ED155621">https://eric.ed.gov/?id=ED155621</a></p></blockquote>

<p>For the Goodmans and many proponents of balanced literacy today, a focus on meaningless, unnatural components is an impediment to the naturally motivated learning of children. And hey, they aren’t wrong — learning these abstract aspects of oral and written language is a barrier to all too many.</p>

<p>But we should be absolutely clear that Gough, Hillinger, Liberman, and many researchers focused on literacy fully acknowledge that these components are difficult and a tremendous potential barrier to learning—in fact, they fully agree with the Goodmans that learning sublexical units (phonemes and their haphazard letter sequences) is unnatural! The key difference is that they also argue that these artificial units are <strong>essential</strong> to reading and therefore must be tackled head on and overcome by children in order for reading to truly be successful.</p>

<p>But I’m taking us away from Liberman, and he’s only getting started. He takes some time to outline what he calls “the conventional view of speech.” According to Liberman, this is a view that assumes that speech is governed by general motor and perceptual systems, rather than ones specialized for language. This means that the processing of speech must be cognitive in nature, as it requires translation: similar to learning the written form, it requires attaching a phonetic label to the sounds of what is heard. In this sense, then, learning language can be perceived as biologically secondary.</p>

<p>The reason Liberman takes time laying this out is because if we are to take this view seriously, it means we must see written language as “equally natural” to speech because it is essentially a similar process of coding that requires cognition, with the only difference being one of mode.</p>

<p>This is pretty much exactly what the Goodmans argued: “. . . if written language can perform the functions of language it must be language.”</p>

<p>The other big issue with the “conventional” view, according to Liberman, is that it means “the elements of a writing system can only be defined as optical shapes. . . [and] makes it hard to avoid the assumption that the trouble with the dyslexic must be in the visual system.” This mistaken assumption is indeed a continuing confusion for many about learning to read, as witnessed by some who attempt to teach kids to read by <a href="https://www.readingbyphonics.com/about-phonics/reading-with-word-shapes.html">noticing the shapes of words</a> (a quick aside for some nuance: some with dyslexia may have <a href="https://dyslexiaida.org/what-is-the-role-of-the-visual-system-in-reading-and-dyslexia/">visual-spatial issues</a>, which may become more apparent when learning non-alphabetic written languages, <a href="https://www.frontiersin.org/articles/10.3389/fpsyg.2020.00462/full">such as Chinese</a>).</p>

<p><img src="https://i.snap.as/I5OTt3fF.png" alt="word shapes"/></p>

<p>Here Liberman makes a key distinction: the evolution of oral language is biological, while written language is cultural. I find myself both deeply compelled by this claim, as it is useful, and also a little resistant. I resist because language is also clearly cultural. But I get that the point here is that the mechanism for learning language is baked into our brains, developing rapidly even as we are in the womb, while acquiring literacy is more dependent on cultural transmission and a significant amount of work.</p>

<blockquote><p>“In the development of writing systems, the answer is simple and beyond dispute: parity was established by agreement. Thus, all who use an alphabet are parties to a compact that prescribes just which optical shapes are to be taken as symbols for which phonological units, the association of the one with the other having been determined <strong>arbitrarily</strong>. Indeed, this is what it means to say that writing systems are <strong>artifacts</strong>, and that the child’s learning the linguistic significance of the characters of the script is a <strong>cognitive</strong> activity.” [bold added]</p></blockquote>

<p>This leads Liberman to propose what he calls the “unconventional view of speech.” I’m going to do some heavy paraphrasing here, but if you’re into speech pathology or like to geek out about the articulatory dimensions of speech, you may find this section of the paper interesting, as he lays out why “co-articulation” is a fundamental aspect of speech. Essentially, he lays out some principles that allow for the claim that “There is no need . . . for a cognitive translation from an initial auditory representation, simply because there is no initial auditory representation,” meaning that speech is processed rapidly and naturally.</p>

<p>And now Liberman turns to the Goodmans directly to take their full argument head on, so it’s worth reproducing this section in full:</p>

<h2 id="how-can-reading-writing-be-made-to-exploit-the-more">HOW CAN READING/WRITING BE MADE TO EXPLOIT THE MORE NATURAL PROCESSES OF SPEECH?</h2>

<blockquote><p>“The conventional view of speech provides no basis for asking this question, since there exists, on this view, no difference in naturalness. It is perhaps for this reason that the (probably) most widely held theory of reading in the United States explicitly takes as its premise that reading and writing are, or at least can be, as natural and easy as speech (Goodman &amp; Goodman, 1979). According to this theory, called ‘whole language,’ reading and writing prove to be difficult only because teachers burden children with what the theorists call ‘bite-size abstract chunks of language such as words, syllables, and phonemes’ (Goodman, 1986). If teachers were to teach children to read and write the way they were (presumably) taught to speak, then there would be no problem.</p></blockquote>

<p>But if we adopt the “unconventional view” of speech, then we don’t view spoken and written language, one auditory and the other visual, as equivalents. Instead, this view allows us to see that speech is processed completely differently, and much more swiftly, and we don’t need to become aware of or think about the subunits of sound within a word: “there is nothing in the ordinary use of language that requires the speaker/listener to put his attention on them.”</p>

<blockquote><p>“The consequence is that experience with speech is normally not sufficient to make one consciously aware of the phonological structure of its words, yet it is exactly this awareness that is required of all who would enjoy the advantages of an alphabetic scheme for reading and writing.”</p></blockquote>

<p>And the specialized properties of speech, such as co-articulation, which allow us to wield and process it so efficiently, actually present us with a greater barrier when converting to written language. Co-articulation, the merging of sounds in the speech stream, “has the disadvantage from the would-be reader/writer’s point of view that it destroys any simple correspondence between the acoustic segments and phonological segments they convey.”</p>

<p>Thus, learning to read and write requires cognitive work, at least initially, that is not required for spoken language (<strong>Note that though I’ve taken this paper at face value in its use of the word speech, focusing on aspects specific to spoken language, many of these characteristics apply just as readily to sign language</strong>).</p>

<p>Whew! This paper was a bit harder to unpack than the others, but I think it’s a very good capstone to our investigation in the series. So are we convinced that learning to read, at least initially, is unnatural?</p>

<p>I’ll pursue some final thoughts to wrap up some loose ends <a href="https://write.as/manderson/a-finale-learning-to-read-and-write-is-a-remarkable-human-feat">in the next post</a>.</p>

<p><a href="https://languageandliteracy.blog/tag:natural" class="hashtag"><span>#</span><span class="p-category">natural</span></a> <a href="https://languageandliteracy.blog/tag:unnatural" class="hashtag"><span>#</span><span class="p-category">unnatural</span></a> <a href="https://languageandliteracy.blog/tag:reading" class="hashtag"><span>#</span><span class="p-category">reading</span></a> <a href="https://languageandliteracy.blog/tag:spokenlanguage" class="hashtag"><span>#</span><span class="p-category">spokenlanguage</span></a> <a href="https://languageandliteracy.blog/tag:writtenlanguage" class="hashtag"><span>#</span><span class="p-category">writtenlanguage</span></a> <a href="https://languageandliteracy.blog/tag:language" class="hashtag"><span>#</span><span class="p-category">language</span></a> <a href="https://languageandliteracy.blog/tag:literacy" class="hashtag"><span>#</span><span class="p-category">literacy</span></a> <a href="https://languageandliteracy.blog/tag:speech" class="hashtag"><span>#</span><span class="p-category">speech</span></a> <a href="https://languageandliteracy.blog/tag:meaning" class="hashtag"><span>#</span><span class="p-category">meaning</span></a> <a href="https://languageandliteracy.blog/tag:Liberman" class="hashtag"><span>#</span><span class="p-category">Liberman</span></a></p>

<p><a href="https://remark.as/p/languageandliteracy.blog/the-relation-of-speech-to-reading-and-writing">Discuss...</a></p>
]]></content:encoded>
      <guid>https://languageandliteracy.blog/the-relation-of-speech-to-reading-and-writing</guid>
      <pubDate>Mon, 24 Jan 2022 02:15:40 +0000</pubDate>
    </item>
    <item>
      <title>Universals of Language</title>
      <link>https://languageandliteracy.blog/universals-of-language?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[In my last post, we looked at a wonderful paper, “Universals in Learning to Read Across Languages and Writing Systems”, that outlines operating principles of reading and writing across languages, as well as some key variations. Continuing on this theme, I wanted to highlight another recent paper, “The universal language network: A cross-linguistic investigation spanning 45 languages and 11 language families.”&#xA;&#xA;The project is cool — the researchers have started a cross-linguistic database of brain scans, and their initial findings demonstrate a strong universal neural basis for language across multiple languages. Here’s the key finding that stood out to me:&#xA;&#xA;  In summary, we have here established that several key properties of the neural architecture of language—including its topography, lateralization to the left hemisphere, strong within network functional integration, and selectivity for linguistic processing—hold across speakers of diverse languages spanning 11 language families; and the variability we observed across languages is lower than the inter-individual variability. The language brain network therefore appears well-suited to support the broadly common features of languages, shaped by biological and cultural evolution.&#xA;  (Ayyash et al., 2021)&#xA;&#xA;I found out about this paper from this Twitter thread from one of the researchers, Ev Fedorenko, and her thread also provides a neat summary of the project.&#xA;&#xA;As this database of brain scans across languages is built out, it will be interesting to see what specific variations between languages and neural architecture may arise. 
For example, another recent paper, “Difference Between Children and Adults in the Print-speech Coactivated Network,” examined the brain scans of native Chinese speakers and found some variations from past studies in the brains of developing readers, most likely due to the difference in writing systems in terms of the lack of grapheme-phoneme correspondence for Chinese characters, as well as how a single pronunciation can have many different meanings represented by different visual characters.&#xA;&#xA;  Taken together, our findings indicate that print-speech convergence is generally language-universal in adults, but it shows some language-specific features in developing readers.&#xA;  (He et al., 2021)&#xA;&#xA;Overall, it’s fascinating to see how current research converges on the significant universality across languages in terms of how literacy develops, and exciting to see that specific differences between languages and writing systems are beginning to be studied with greater specificity.&#xA;&#xA;As Perfetti and Verhoeven tidily pointed out in their paper:&#xA;&#xA;  The story of learning to read thus is one of universals and particulars: (i) Universals, because writing maps onto language, no matter the details of the system, creating a common challenge in learning that mapping, and because experience leads to familiarity-based identification across languages. (ii) Particulars, because it does matter for learning how different levels of language – morphemes, syllables, phonemes – are engaged; this in turn depends on the structure of the language and how its written form accommodates this structure.&#xA;  (Verhoeven &amp; Perfetti, 2021)&#xA;&#xA;#speech #language #literacy #universal #reading #multilingualism #orthography #brain #neuroscience #research&#xA;&#xA;&lt;a href=&#34;https://remark.as/p/languageandliteracy.blog/universals-of-language&#34;&gt;Discuss...&lt;/a&gt;]]&gt;</description>
      <content:encoded><![CDATA[<p>In my <a href="https://languageandliteracy.blog/operating-principles-across-written-languages">last post</a>, we looked at a wonderful paper, <a href="https://www.tandfonline.com/doi/full/10.1080/10888438.2021.1938575">“<em>Universals in Learning to Read Across Languages and Writing Systems</em>”</a>, that outlines operating principles of reading and writing across languages, as well as some key variations. Continuing on this theme, I wanted to highlight another recent paper, <a href="https://www.biorxiv.org/content/10.1101/2021.07.28.454040v1">“<em>The universal language network: A cross-linguistic investigation spanning 45 languages and 11 language families</em>.”</a></p>

<p>The project is cool — the researchers have started a cross-linguistic database of brain scans, and their initial findings demonstrate a strong universal neural basis for language across multiple languages. Here’s the key finding that stood out to me:</p>

<blockquote><p>In summary, we have here established that several key properties of the neural architecture of language—including its topography, lateralization to the left hemisphere, strong within network functional integration, and selectivity for linguistic processing—hold across speakers of diverse languages spanning 11 language families; and the variability we observed across languages is lower than the inter-individual variability. The language brain network therefore appears well-suited to support the broadly common features of languages, shaped by biological and cultural evolution.
(Ayyash et al., 2021)</p></blockquote>

<p>I found out about this paper from <a href="https://twitter.com/ev_fedorenko/status/1420650532998369282?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1420650532998369282%7Ctwgr%5Ebfaedfa128468bd53d4f802c4c5b0203c7e8127d%7Ctwcon%5Es1_c10&amp;ref_url=https%3A%2F%2Flanguageliteracydotblog.wordpress.com%2F2021%2F08%2F27%2Funiversals-of-language%2F">this Twitter thread</a> from one of the researchers, Ev Fedorenko, and her thread also provides a neat summary of the project.</p>

<p>As this database of brain scans across languages is built out, it will be interesting to see what specific variations between languages and neural architecture may arise. For example, another recent paper, <a href="https://www.tandfonline.com/doi/full/10.1080/10888438.2021.1965607?src=">“<em>Difference Between Children and Adults in the Print-speech Coactivated Network</em>,”</a> examined the brain scans of native Chinese speakers and found some variations from past studies in the brains of developing readers, most likely due to differences in writing systems: Chinese characters lack grapheme-phoneme correspondence, and a single pronunciation can have many different meanings represented by different visual characters.</p>

<blockquote><p>Taken together, our findings indicate that print-speech convergence is generally language-universal in adults, but it shows some language-specific features in developing readers.
(He et al., 2021)</p></blockquote>

<p>Overall, it’s fascinating to see how current research converges on the significant universality across languages in how literacy develops, and exciting to see that specific differences between languages and writing systems are beginning to be studied in greater depth.</p>

<p>As Perfetti and Verhoeven tidily pointed out in their paper:</p>

<blockquote><p>The story of learning to read thus is one of universals and particulars: (i) Universals, because writing maps onto language, no matter the details of the system, creating a common challenge in learning that mapping, and because experience leads to familiarity-based identification across languages. (ii) <strong>Particulars, because it does matter for learning how different levels of language – morphemes, syllables, phonemes – are engaged; this in turn depends on the structure of the language and how its written form accommodates this structure.</strong>
(Verhoeven &amp; Perfetti, 2021)</p></blockquote>

<p><a href="https://languageandliteracy.blog/tag:speech" class="hashtag"><span>#</span><span class="p-category">speech</span></a> <a href="https://languageandliteracy.blog/tag:language" class="hashtag"><span>#</span><span class="p-category">language</span></a> <a href="https://languageandliteracy.blog/tag:literacy" class="hashtag"><span>#</span><span class="p-category">literacy</span></a> <a href="https://languageandliteracy.blog/tag:universal" class="hashtag"><span>#</span><span class="p-category">universal</span></a> <a href="https://languageandliteracy.blog/tag:reading" class="hashtag"><span>#</span><span class="p-category">reading</span></a> <a href="https://languageandliteracy.blog/tag:multilingualism" class="hashtag"><span>#</span><span class="p-category">multilingualism</span></a> <a href="https://languageandliteracy.blog/tag:orthography" class="hashtag"><span>#</span><span class="p-category">orthography</span></a> <a href="https://languageandliteracy.blog/tag:brain" class="hashtag"><span>#</span><span class="p-category">brain</span></a> <a href="https://languageandliteracy.blog/tag:neuroscience" class="hashtag"><span>#</span><span class="p-category">neuroscience</span></a> <a href="https://languageandliteracy.blog/tag:research" class="hashtag"><span>#</span><span class="p-category">research</span></a></p>

<p><a href="https://remark.as/p/languageandliteracy.blog/universals-of-language">Discuss...</a></p>
]]></content:encoded>
      <guid>https://languageandliteracy.blog/universals-of-language</guid>
      <pubDate>Fri, 27 Aug 2021 22:27:39 +0000</pubDate>
    </item>
    <item>
      <title>Whole to Part to Whole</title>
      <link>https://languageandliteracy.blog/whole-to-part-to-whole?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[NOTE: Since writing this post, I have revised my thinking. You can see my updated thinking here.&#xA;&#xA;Oral language is baked into our brains. We are born to learn to speak.&#xA;&#xA;Similarly, reading our visual surroundings is second nature. Our eyes are neurally attuned to pick out fine-grained distinctions and patterns amidst the noise.&#xA;&#xA;But written language is something we graft onto our existing circuitry. Graphemes get bootstrapped onto our auditory and visual processing neural networks. We need repeated exposure to letters and words and sentences in print to fine-tune the fluent mapping of letter sequences and syntactical constructions into comprehension. And if our brain’s existing pathways are resistant to these changes—because our prior experiences with oral language do not align well with the written language (we speak a dialect that diverges more in sound from the spelling, or we haven’t had much exposure to the type of vocabulary and syntax more frequently encountered in written language)—then we may need additional explicit instruction and practice to take us to the point that decoding is fluid and effortless.&#xA;&#xA;But unfortunately, children who may need that extra bit of clear and structured practice often do not receive it. Instead, they are allowed to skip over words they can’t read, and passed on to the next grade.&#xA;&#xA;How can we pave the pathway to proficient reading for all our children?&#xA;&#xA;What We Can Hear Is What We Can Read&#xA;&#xA;There is a reciprocal process between learning letter-sounds and reading letter sequences within words.&#xA;&#xA;As we learn more graphemes, we refine our phonemic awareness, and as we refine our phonemic awareness, we further develop our ability to recognize words in print.&#xA;&#xA;Yet whether we should directly and explicitly practice and teach phonemic awareness itself (apart from phonics) is an area of contention amongst reading specialists, it seems. 
Furthermore, whether we should teach larger units of letter patterns within words (sometimes called ‘word families’ or ‘rime units’ or ‘phonograms’) is another area of contention, which you can see most explicitly in debates about synthetic vs. analytic phonics. There are also arguments about when to introduce deeper aspects of word study, such as etymology and morphology (some Structured Word Inquiry proponents claim it should start from the very beginning). And an even further area of debate is whether we should teach phonemic awareness to proficiency beyond blending and segmenting to the advanced levels of deletion and substitution of phonemes.&#xA;&#xA;Since beginning my journey into reading research, I’ve come across these debates, dug quite a bit further into the research, and still feel conflicted. From a research perspective, the weight does seem to land primarily on the side of teaching the key aspects of phonemic awareness first and foremost, and not bothering with other phonological skills like onset-rime or advanced phonemic awareness activities (see the last issue of The Reading League Journal and the latest findings on PA for more).&#xA;&#xA;And yet I still resist hardline rigidity against phonological awareness instruction and onset-rime practice. I believe these practices have their place. I should preface this by saying that I’m open to further critique and research that will challenge my suppositions.&#xA;&#xA;Here’s my argument:&#xA;&#xA;What we know about “the reading brain” is that reading is unnatural, and that as I outlined in the narrative at the start of this piece, we are essentially bootstrapping reading onto existing visual and aural brain architecture. 
For some kids, this process occurs smoothly and implicitly, but for many other students, it doesn’t, and they require not only more practice, but more explicit instruction and practice.&#xA;&#xA;A fluent reader can move almost instantaneously between letter sequences and larger chunks of words (smaller and larger “grain sizes”), depending on the context of the sentence. For students who do not have such fluency, their cognitive energy is taxed by disentangling the sounds and meaning for each word.&#xA;&#xA;Furthermore, for students who are learning English as a new language alongside learning to read, or for students who speak an English dialect that has greater differences from the written form of English, their brains are doing additional work. For such students, it seems to me that providing more opportunities to gain fluency and move from phonemes to larger grain sizes and back would support the formation of their written English brain. For example, consider a second grade student who speaks Spanish as his first language and has just arrived in the U.S., learning to both read and speak in English. Spanish is a primarily syllabic language, and phonemes map more directly onto spellings. Providing this student with more opportunities to practice hearing, speaking, and mapping phonemes, onsets, rimes, and morphemes into written words will support his reading development and his language development.&#xA;&#xA;So I argue that the progression and practice of our word-level instruction should move recursively from hearing a word as a whole, to hearing and seeing its chunks (by “chunks” I mean rime units and roots/affixes), to seeing and hearing its individual letters and sounds, to seeing its chunks, to seeing the word as a whole. 
Through this recursive movement, we can support the neural connections that need to form in the fluent reading brain.&#xA;&#xA;Honestly, I find the rigidity of some against phonological awareness instruction and onset-rime unit practice misplaced. We’re not talking significant instructional time here. A systematic program for phonological awareness, such as Heggerty, for example, is 10-15 minutes a day. That’s a small investment for a potentially huge payoff in prevention of later reading difficulty for the kids who need it the most.&#xA;&#xA;Since writing this, I have changed and revised my thinking about the teaching of phonemic awareness and of the practice of phonology that is not connected to letters. Read more here&#xA;&#xA;Graphic from Is It Ever Too Late to Teach an Older Struggling Reader? Using Diagnostic Assessment to Determine Appropriate Intervention by Carrie Thomas Beck&#xA;&#xA;On the trajectory of beginning reading skills, onset-rime practice may possibly provide an onramp, though this is contested and some (I think convincingly) argue that focusing on phonemic awareness first and foremost is better bang for the buck. But after phonics instruction has begun and students have acquired their letter sounds to proficiency and are learning the various generalizations and irregularities of the English language in print, I believe that rime units have a critical role to play, along with beginning inflectional morphology like the plural ‘s’, the past-tense ‘ed’, etc.&#xA;&#xA;Why is this? It’s because as readers develop fluency in decoding unknown words, they also begin to develop greater efficiency in moving between smaller and larger grain sizes within words. 
For example, a 3rd grade reader encountering a new multisyllabic word in an informational text, such as “additional,” will slow themselves down and pay attention to the word parts, using their knowledge of syllabication and morphemes and word families as needed to break it up and recognize its sounds and meaning.&#xA;&#xA;So gaining proficiency in advanced phonemic awareness alongside onset-rime and morphological awareness can potentially boost those students who are showing up in 2nd, 3rd, and 4th grades as struggling readers, even if they have received systematic phonics instruction K-1.&#xA;&#xA;Here are a few pieces of research aligning with my claims:&#xA;&#xA;Reading Acquisition, Developmental Dyslexia, and Skilled Reading Across Languages: A Psycholinguistic Grain Size Theory&#xA;Orthographic processing: A ‘mid-level’ vision of reading: The 44th Sir Frederic Bartlett Lecture&#xA;David Kilpatrick’s “phonemic proficiency hypothesis” (read pretty much anything by him to learn more about this; he has tons of lectures posted online as well). His Essentials book is essential reading indeed.&#xA;&#xA;Don’t agree? Fire away! But one thing I want to stress is that you should consider the student populations that have been assessed or worked with in your experience or research. Are they historically marginalized and underserved populations? Are they learning English as a new language? Are they struggling with a learning disability? I’m less interested in arguments that center students who typically benefit from the existing methods of instruction.&#xA;&#xA;Since writing this, I have changed and revised my thinking about the teaching of phonemic awareness and of the practice of phonology that is not connected to letters. Read more here&#xA;&#xA;#phonemicawareness #phonology #sounds #speech #reading #literacy #language #neuroscience #research]]&gt;</description>
      <content:encoded><![CDATA[<p><em>NOTE: Since writing this post, I have revised my thinking. You can see my updated thinking <a href="https://www.nomanis.com.au/blog/single-post/i-think-i-was-wrong-about-phonemic-awareness">here.</a></em></p>

<p>Oral language is baked into our brains. We are born to learn to speak.</p>

<p>Similarly, reading our visual surroundings is second nature. Our eyes are neurally attuned to pick out fine-grained distinctions and patterns amidst the noise.</p>

<p>But written language is something we graft onto our existing circuitry. Graphemes get bootstrapped onto our auditory and visual processing neural networks. We need repeated exposure to letters and words and sentences in print to fine-tune the fluent mapping of letter sequences and syntactical constructions into comprehension. And if our brain’s existing pathways are resistant to these changes—because our prior experiences with oral language do not align well with the written language (we speak a dialect that diverges more in sound from the spelling, or we haven’t had much exposure to the type of vocabulary and syntax more frequently encountered in written language)—then we may need additional explicit instruction and practice to take us to the point that decoding is fluid and effortless.</p>

<p>But unfortunately, children who may need that extra bit of clear and structured practice often do not receive it. Instead, they are allowed to skip over words they can’t read, and passed on to the next grade.</p>

<p>How can we pave the pathway to proficient reading for all our children?</p>

<h1 id="what-we-can-hear-is-what-we-can-read">What We Can Hear Is What We Can Read</h1>

<p>There is a reciprocal process between learning letter-sounds and reading letter sequences within words.</p>

<p>As we learn more graphemes, we refine our phonemic awareness, and as we refine our phonemic awareness, we further develop our ability to recognize words in print.</p>

<p>Yet whether we should directly and explicitly practice and teach phonemic awareness itself (apart from phonics) is an area of contention amongst reading specialists, it seems. Furthermore, whether we should teach larger units of letter patterns within words (sometimes called ‘word families’ or ‘rime units’ or ‘phonograms’) is another area of contention, which you can see most explicitly in debates about synthetic vs. analytic phonics. There are also arguments about when to introduce deeper aspects of word study, such as etymology and morphology (some Structured Word Inquiry proponents claim it should start from the very beginning). And an even further area of debate is whether we should teach phonemic awareness to proficiency beyond blending and segmenting to the advanced levels of deletion and substitution of phonemes.</p>

<p>Since beginning my journey into reading research, I’ve come across these debates, dug quite a bit further into the research, and still feel conflicted. From a research perspective, the weight does seem to land primarily on the side of teaching the key aspects of phonemic awareness first and foremost, and not bothering with other phonological skills like onset-rime or advanced phonemic awareness activities (see the last issue of <a href="https://www.thereadingleague.org/wp-content/uploads/2020/10/TOC-Sept-Oct-2020.pdf">The Reading League Journal</a> and the latest findings on PA for more).</p>

<p>And yet I still resist hardline rigidity against phonological awareness instruction and onset-rime practice. I believe these practices have their place. I should preface this by saying that I’m open to further critique and research that will challenge my suppositions.</p>

<p>Here’s my argument:</p>

<p>What we know about “the reading brain” is that reading is unnatural, and that as I outlined in the narrative at the start of this piece, we are essentially bootstrapping reading onto existing visual and aural brain architecture. For some kids, this process occurs smoothly and implicitly, but for many other students, it doesn’t, and they require not only more practice, but more explicit instruction and practice.</p>

<p>A fluent reader can move almost instantaneously between letter sequences and larger chunks of words (smaller and larger “grain sizes”), depending on the context of the sentence. For students who do not have such fluency, their cognitive energy is taxed by disentangling the sounds and meaning for each word.</p>

<p>Furthermore, for students who are learning English as a new language alongside learning to read, or for students who speak an English dialect that has greater differences from the written form of English, their brains are doing additional work. For such students, it seems to me that providing more opportunities to gain fluency and move from phonemes to larger grain sizes and back would support the formation of their written English brain. For example, consider a second grade student who speaks Spanish as his first language and has just arrived in the U.S., learning to both read and speak in English. Spanish is a primarily syllabic language, and phonemes map more directly onto spellings. Providing this student with more opportunities to practice hearing, speaking, and mapping phonemes, onsets, rimes, and morphemes into written words will support his reading development and his language development.</p>

<p>So I argue that the progression and practice of our word-level instruction should move recursively from hearing a word as a whole, to hearing and seeing its chunks (by “chunks” I mean rime units and roots/affixes), to seeing and hearing its individual letters and sounds, to seeing its chunks, to seeing the word as a whole. Through this recursive movement, we can support the neural connections that need to form in the fluent reading brain.</p>

<p>Honestly, I find the rigidity of some against phonological awareness instruction and onset-rime unit practice misplaced. We’re not talking significant instructional time here. A systematic program for phonological awareness, such as Heggerty, for example, is 10-15 minutes a day. That’s a small investment for a potentially huge payoff in prevention of later reading difficulty for the kids who need it the most.</p>

<p><em>Since writing this, I have changed and revised my thinking about the teaching of phonemic awareness and of the practice of phonology that is not connected to letters. Read more <a href="https://www.nomanis.com.au/blog/single-post/i-think-i-was-wrong-about-phonemic-awareness">here</a></em></p>

<p><img src="https://i.snap.as/kmYLbFo4.png" alt="Graphic from Is It Ever Too Late to Teach an Older Struggling Reader? Using Diagnostic Assessment to Determine Appropriate Intervention by Carrie Thomas Beck"/>
<em>Graphic from <a href="https://www.corelearn.com/newsletter/reading-expert-winter-2020/">Is It Ever Too Late to Teach an Older Struggling Reader? Using Diagnostic Assessment to Determine Appropriate Intervention</a> by Carrie Thomas Beck</em></p>

<p>On the trajectory of beginning reading skills, onset-rime practice may possibly provide an onramp, though this is contested and some (I think convincingly) argue that focusing on phonemic awareness first and foremost is better bang for the buck. But after phonics instruction has begun and students have acquired their letter sounds to proficiency and are learning the various generalizations and irregularities of the English language in print, I believe that rime units have a critical role to play, along with beginning inflectional morphology like the plural ‘s’, the past-tense ‘ed’, etc.</p>

<p>Why is this? It’s because as readers develop fluency in decoding unknown words, they also begin to develop greater efficiency in moving between smaller and larger grain sizes within words. For example, a 3rd grade reader encountering a new multisyllabic word in an informational text, such as “additional,” will slow themselves down and pay attention to the word parts, using their knowledge of syllabication and morphemes and word families as needed to break it up and recognize its sounds and meaning.</p>

<p>So gaining proficiency in advanced phonemic awareness alongside onset-rime and morphological awareness can potentially boost those students who are showing up in 2nd, 3rd, and 4th grades as struggling readers, even if they have received systematic phonics instruction K-1.</p>

<p>Here are a few pieces of research aligning with my claims:</p>
<ul><li><a href="https://psycnet.apa.org/buy/2004-22408-001">Reading Acquisition, Developmental Dyslexia, and Skilled Reading Across Languages: A Psycholinguistic Grain Size Theory</a></li>
<li><a href="https://journals.sagepub.com/doi/full/10.1080/17470218.2017.1314515#focusIdbibr99-17470218.2017.1314515">Orthographic processing: A ‘mid-level’ vision of reading: The 44th Sir Frederic Bartlett Lecture</a></li>
<li>David Kilpatrick’s “phonemic proficiency hypothesis” (read pretty much anything by him to learn more about this; he has tons of lectures posted online as well). His <a href="https://www.amazon.com/Essentials-Preventing-Overcoming-Difficulties-Psychological/dp/1118845242/ref=as_li_ss_tl?crid=T6216RGPVB46&amp;keywords=essentials+of+assessing+preventing+and+overcoming+reading+kilpatrick&amp;qid=1572811076&amp;sprefix=essential+for+kilpat,aps,159&amp;sr=8-3&amp;linkCode=sl1&amp;tag=readingsimpli-20&amp;linkId=6f93765ef7c54836e83cb21b3e622657&amp;language=en_US_">Essentials book</a> is essential reading indeed.</li></ul>

<p>Don’t agree? Fire away! But one thing I want to stress is that you should consider the student populations that have been assessed or worked with in your experience or research. Are they historically marginalized and underserved populations? Are they learning English as a new language? Are they struggling with a learning disability? I’m less interested in arguments that center students who typically benefit from the existing methods of instruction.</p>

<p><em>Since writing this, I have changed and revised my thinking about the teaching of phonemic awareness and of the practice of phonology that is not connected to letters. Read more <a href="https://www.nomanis.com.au/blog/single-post/i-think-i-was-wrong-about-phonemic-awareness">here</a></em></p>

<p><a href="https://languageandliteracy.blog/tag:phonemicawareness" class="hashtag"><span>#</span><span class="p-category">phonemicawareness</span></a> <a href="https://languageandliteracy.blog/tag:phonology" class="hashtag"><span>#</span><span class="p-category">phonology</span></a> <a href="https://languageandliteracy.blog/tag:sounds" class="hashtag"><span>#</span><span class="p-category">sounds</span></a> <a href="https://languageandliteracy.blog/tag:speech" class="hashtag"><span>#</span><span class="p-category">speech</span></a> <a href="https://languageandliteracy.blog/tag:reading" class="hashtag"><span>#</span><span class="p-category">reading</span></a> <a href="https://languageandliteracy.blog/tag:literacy" class="hashtag"><span>#</span><span class="p-category">literacy</span></a> <a href="https://languageandliteracy.blog/tag:language" class="hashtag"><span>#</span><span class="p-category">language</span></a> <a href="https://languageandliteracy.blog/tag:neuroscience" class="hashtag"><span>#</span><span class="p-category">neuroscience</span></a> <a href="https://languageandliteracy.blog/tag:research" class="hashtag"><span>#</span><span class="p-category">research</span></a></p>
]]></content:encoded>
      <guid>https://languageandliteracy.blog/whole-to-part-to-whole</guid>
      <pubDate>Sun, 03 Jan 2021 10:11:34 +0000</pubDate>
    </item>
    <item>
      <title>The Riches of ASHA</title>
      <link>https://languageandliteracy.blog/the-riches-of-asha?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[In another post, I wrote about the riches of Speech-Language Pathology and what this domain of research and practice has to offer for all educators.&#xA;&#xA;I&#39;d also like to highlight that, relatedly, the American Speech-Language-Hearing Association (ASHA) and its publications have a lot to offer to those of us getting into the Science of Reading.&#xA;&#xA;Let me just give you a recent example: the &#34;JSLHR Research Symposium Forum: Advances in Specific Language Impairment Research and Intervention&#34; offers some really interesting and useful open access research. Here are some tidbits:&#xA;&#xA;There&#39;s a useful overview of dyslexia and DLD/SLI from Suzanne Adlof that stresses the need to screen and diagnose language for students who have demonstrated word reading problems, because DLD and dyslexia often co-occur:&#xA;&#xA;  &#34;Considering the frequent comorbidity of dyslexia and SLI, all school-aged children who are identified with word reading problems should receive a thorough language evaluation.&#34; --Suzanne Adlof&#xA;&#xA;Spaced retrieval practice has gotten a lot of attention from researchED-type folks over the last few years (as it should), and so this piece on its benefits to word learning for students with SLI will be further reaffirming.&#xA;&#xA;I found this one by Pamela Hadley on &#34;Exploring Sentence Diversity at the Boundary of Typical and Impaired Language Abilities&#34; especially useful, as while I am fully invested in explicit sentence-level instruction, I sometimes struggle to know exactly what to investigate and unpack in a sentence beyond the basics. 
In this paper, Hadley provides a neat way to think of linguistic development at the sentence level:&#xA;&#xA;  &#34;...as a series of four developmental steps: words, verbs, childlike sentences, and adult sentences.&#34; &#xA;&#xA;What she also highlights is how important verbs are as a developmental stage, given the complexity of the function of verbs in a sentence:&#xA;&#xA;  &#34;Verbs carry information about the number of participants in an event and the semantic roles of those participants.&#34;&#xA;&#xA;And much more in there to think about!&#xA;&#xA;#ASHA #speech #language #literacy #DLD #dyslexia #learning #children #multilingualism #research]]&gt;</description>
      <content:encoded><![CDATA[<p>In <a href="https://languageandliteracy.blog/the-riches-of-speech-language-pathology">another post</a>, I wrote about the riches of Speech-Language Pathology and what this domain of research and practice has to offer for all educators.</p>

<p>I&#39;d also like to highlight that, relatedly, the American Speech-Language-Hearing Association (ASHA) and its publications have a lot to offer to those of us getting into the Science of Reading.</p>

<p>Let me just give you a recent example: the “JSLHR Research Symposium Forum: Advances in Specific Language Impairment Research and Intervention” offers some really interesting and useful open access research. Here are some tidbits:</p>
<ul><li>There&#39;s a useful overview of dyslexia and DLD/SLI from Suzanne Adlof that stresses the need to screen and diagnose language for students who have demonstrated word reading problems, because DLD and dyslexia often co-occur:</li></ul>

<blockquote><p>“Considering the frequent comorbidity of dyslexia and SLI, all school-aged children who are identified with word reading problems should receive a thorough language evaluation.” —Suzanne Adlof</p></blockquote>
<ul><li><p>Spaced retrieval practice has gotten a lot of attention from researchED-type folks over the last few years (as it should), and so <a href="https://pubs.asha.org/doi/10.1044/2020_JSLHR-20-00006">this piece</a> on its benefits to word learning for students with SLI will be further reaffirming.</p></li>

<li><p>I found this one by Pamela Hadley on <a href="https://pubs.asha.org/doi/10.1044/2020_JSLHR-20-00031">“Exploring Sentence Diversity at the Boundary of Typical and Impaired Language Abilities”</a> especially useful, as while I am fully invested in explicit sentence-level instruction, I sometimes struggle to know exactly what to investigate and unpack in a sentence beyond the basics. In this paper, Hadley provides a neat way to think of linguistic development at the sentence level:</p></li></ul>

<blockquote><p>”...as a series of four developmental steps: words, verbs, childlike sentences, and adult sentences.”</p></blockquote>

<p>What she also highlights is how important verbs are as a developmental stage, given the complexity of the function of verbs in a sentence:</p>

<blockquote><p>“Verbs carry information about the number of participants in an event and the semantic roles of those participants.”</p></blockquote>

<p>And much more in there to think about!</p>

<p><a href="https://languageandliteracy.blog/tag:ASHA" class="hashtag"><span>#</span><span class="p-category">ASHA</span></a> <a href="https://languageandliteracy.blog/tag:speech" class="hashtag"><span>#</span><span class="p-category">speech</span></a> <a href="https://languageandliteracy.blog/tag:language" class="hashtag"><span>#</span><span class="p-category">language</span></a> <a href="https://languageandliteracy.blog/tag:literacy" class="hashtag"><span>#</span><span class="p-category">literacy</span></a> <a href="https://languageandliteracy.blog/tag:DLD" class="hashtag"><span>#</span><span class="p-category">DLD</span></a> <a href="https://languageandliteracy.blog/tag:dyslexia" class="hashtag"><span>#</span><span class="p-category">dyslexia</span></a> <a href="https://languageandliteracy.blog/tag:learning" class="hashtag"><span>#</span><span class="p-category">learning</span></a> <a href="https://languageandliteracy.blog/tag:children" class="hashtag"><span>#</span><span class="p-category">children</span></a> <a href="https://languageandliteracy.blog/tag:multilingualism" class="hashtag"><span>#</span><span class="p-category">multilingualism</span></a> <a href="https://languageandliteracy.blog/tag:research" class="hashtag"><span>#</span><span class="p-category">research</span></a></p>
]]></content:encoded>
      <guid>https://languageandliteracy.blog/the-riches-of-asha</guid>
      <pubDate>Fri, 23 Oct 2020 00:08:43 +0000</pubDate>
    </item>
    <item>
      <title>The Riches of Speech-Language Pathology</title>
      <link>https://languageandliteracy.blog/the-riches-of-speech-language-pathology?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[When I was a special education teacher, I also coordinated the IEPs (Individualized Education Programs) for my school, and served as the district representative at our IEP meetings, meaning that I had some part in most of the IEPs written in my building, whether I coordinated the gathering of information or facilitated the meeting with parents.&#xA;&#xA;We served some children identified with speech language impairment (SLI), and I worked pretty closely with the speech-language pathologist in my school in the sense that I always ensured that IEPs were written with her review and meaningful input, and she was invited to IEP meetings for the children she worked with. We talked when we could about the children we serviced, and I solicited her advice on many occasions.&#xA;&#xA;Yet I don’t know if I ever fully understood what she really did in speech-language therapy sessions. She did her thing, and I did my thing as a co-teacher in 6th, 7th, and 8th grade ELA classrooms. 
We were both pretty busy.&#xA;&#xA;As I’ve been learning much more about reading, literacy, and language, I’ve increasingly become drawn into the research and expertise of the speech-language pathology realm (SLP) (we do love our tripartite acronyms in ed, don’t we), and discovered a wealth of knowledge that I really wish I had understood more of when I was in the classroom and coordinating the development of IEPs.&#xA;&#xA;Also, as I’ve been struggling to bridge what I’ve been learning about the “science of reading” with my newer focus on the interconnections between language development and literacy development, I’ve found SLPs to be an incredibly useful resource to building that bridge.&#xA;&#xA;You see, if you know all about the Simple View of Reading framework (SVR), you then know that language comprehension at large, alongside of decoding and word-level recognition, is a huge component of reading ability--the one that is there from the beginning, but then takes on an outsized importance once fluency with decoding is achieved.&#xA;&#xA;The Simple View of Reading&#xA;&#xA;And Speech Language Pathology is all about the subcomponents of language comprehension, from explicit training in the articulation of speech sounds, to explicit intervention to target needed language skills, such as knowledge of story grammar, making inferences, or the talk moves that are needed to have discourse about a text.&#xA;&#xA;It was only recently that I became aware of the term Developmental Language Disorder (DLD), and discovered that there’s a wealth of developing knowledge about DLD that could further inform our assessment, instruction, and intervention of children who need more intensive supports in any of those subcomponents of language.&#xA;&#xA;If we refer back to the SVR, we can think of three main patterns of students who are having trouble learning to read: students who have difficulty with language comprehension, students who have difficulty decoding, or students have 
difficulty with both:&#xA;&#xA;A graphic showing the equation of Language Comprehension X Decoding = Reading Comprehension, with a struggle in LC as DLD, and a struggle in Decoding as Dyslexia.&#xA;&#xA;Students may have difficulty reading due to either language comprehension, decoding, or both.&#xA;&#xA;Awareness of dyslexic patterns have grown quite a bit, to the point that legislation addressing it has arisen in multiple states. But awareness of patterns of DLD remains low in comparison.&#xA;&#xA;It may seem strange that I present DLD and dyslexia as defining student profiles to guide overall education assessment and instruction — but as someone who comes from a SPED stance, I’ve always seen the way we typically think of instruction in schools as backward. As a cornerstone, we should center our focus on the students who may struggle with language and literacy the most and plan forward from there, rather than as an afterthought. We would then be able to improve outcomes for many more children who may not struggle as significantly, yet who also require more explicit support or more opportunities for practice. Instead, we design schools to center students who already have academic language and literacy skills in place, and we widen inequitable outcomes.&#xA;&#xA;So with that in mind, speech-language pathology is an undervalued domain that has much to offer in considering the language needs of our students and what we need to do to screen, diagnose, and intervene to address those needs. 
Rather than relegating speech-language pathologists to the people who do that esoteric intervention thing in the room over there 3x a week with some children, we should be elevating their expertise and knowledge and seeking to disseminate that knowledge to general education teachers, most especially in earlier grades, so that we can seek to prevent language issues from arising.&#xA;&#xA;I feel fortunate to have discovered many SLPs and researchers are active on social media and other venues beyond research papers, and though I hesitate to call any out by name because I know I will be missing way too many in any listing I give, just a few to get you started in your own journey of learning on language:&#xA;&#xA;Tiffany Hogan: check out her co-authored paper with Suzanne Adlof on the intersections of dyslexia and DLD, and she has a podcast! A great list of ones on DLD related issues here&#xA;Elsa Cárdenas-Hagan is a bilingual SLP who brings a structured literacy lens to supporting English learners with foundational skills in reading and writing, in ways that honor and leverage their home language. Check out her book and her website. Her paper on Cross-Language Connections for ELs is a solid resource I keep coming back to.&#xA;Trina Spencer: one of the co-authors of the CUBED assessments, which is now one of my go-to recommendations for a screener/diagnostic for foundational skills related to listening comprehension. If you’re wondering what SLP might be able to offer in our teaching of narratives, check out her co-authored paper on narrative interventions. Also check out her website with a ton of resources for instruction and intervention.&#xA;Elizabeth D. Peña and Habla Lab: understanding the intersections of bilingual and multilingualism with DLD is a critical area of need. Check out the blog (NOTE: it may not be updated anymore). 
I learned a lot about the concept of “dynamic assessment” from them.&#xA;Julie Washington: leading the charge to bring explicit attention to African American English and how the use of the vernacular relates to literacy development and instructional opportunity. Check out the article on her in The Atantic and her co-authored paper with Mark Seidenberg on teaching reading to African American children in American Educator&#xA;Cate Crowley: she leads the LEADERSproject at Columbia -- lots of resources are on hand regarding evaluation and intervention for culturally and linguistically diverse children. I am a big fan of the freely available SLAM cards she has made available for language sampling and have been testing these out with some of my own sampling methods -- but you can go right ahead and leverage the already made SLAM Guidelines for Analysis for each SLAM card&#xA;Lisa Archibald: Dr. Archibald goes deep into cognition and memory and their intersections with language. Whenever I&#39;ve put out some questions into the Twitterverse (before Musk trashed it), she has offered guidance and food for thought.&#xA;&#xA;There’s so many more SLPs out there to list here, so please view just view this as a place to get started if you&#39;re interested in these topics …&#xA;&#xA;Dig in! Speech-language pathology has a lot to offer those of us who are just beginning on our journeys to understanding language and literacy.&#xA;&#xA;#SLPs #speech #language #literacy #DLD #bidialectalism #multilingualism #learning #research #SVR]]&gt;</description>
      <content:encoded><![CDATA[<p>When I was a special education teacher, I also coordinated the IEPs (Individualized Education Programs) for my school, and served as the district representative at our IEP meetings, meaning that I had some part in most of the IEPs written in my building, whether I coordinated the gathering of information or facilitated the meeting with parents.</p>

<p>We served some children identified with speech-language impairment (SLI), and I worked pretty closely with the speech-language pathologist in my school, in the sense that I always ensured that IEPs were written with her review and meaningful input, and that she was invited to IEP meetings for the children she worked with. We talked when we could about the children we served, and I solicited her advice on many occasions.</p>

<p>Yet I don’t know if I ever fully understood what she really did in speech-language therapy sessions. She did her thing, and I did my thing as a co-teacher in 6th, 7th, and 8th grade ELA classrooms. We were both pretty busy.</p>

<p>As I’ve been learning much more about reading, literacy, and language, I’ve increasingly been drawn into the research and expertise of the speech-language pathology (SLP) realm (we do love our tripartite acronyms in ed, don’t we?), and discovered a wealth of knowledge that I really wish I had understood more of when I was in the classroom and coordinating the development of IEPs.</p>

<p>Also, as I’ve been struggling to bridge what I’ve been learning about the “science of reading” with my newer focus on the interconnections between language development and literacy development, I’ve found SLPs to be an incredibly useful resource for building that bridge.</p>

<p>You see, if you know all about the Simple View of Reading framework (SVR), you then know that language comprehension at large, alongside decoding and word-level recognition, is a huge component of reading ability—the one that is there from the beginning, but then takes on an outsized importance once fluency with decoding is achieved.</p>

<p><img src="https://i.snap.as/WCr89E7R.png" alt="The Simple View of Reading"/></p>

<p>And speech-language pathology is all about the subcomponents of language comprehension, from explicit training in the articulation of speech sounds to explicit intervention targeting needed language skills, such as knowledge of story grammar, making inferences, or the talk moves that are needed to have discourse about a text.</p>

<p>It was only recently that I became aware of the term Developmental Language Disorder (DLD), and discovered that there’s a wealth of developing knowledge about DLD that could further inform our assessment, instruction, and intervention of children who need more intensive supports in any of those subcomponents of language.</p>

<p>If we refer back to the SVR, we can think of three main patterns of students who are having trouble learning to read: students who have difficulty with language comprehension, students who have difficulty decoding, or students who have difficulty with both:</p>

<p><img src="https://i.snap.as/QmSktb87.webp" alt="A graphic showing the equation of Language Comprehension X Decoding = Reading Comprehension, with a struggle in LC as DLD, and a struggle in Decoding as Dyslexia."/></p>

<p>Students may have difficulty reading due to either language comprehension, decoding, or both.</p>

<p>Awareness of dyslexic patterns has grown quite a bit, to the point that legislation addressing dyslexia has arisen in multiple states. But awareness of patterns of DLD remains low in comparison.</p>

<p>It may seem strange that I present DLD and dyslexia as defining student profiles to guide overall education assessment and instruction — but as someone who comes from a SPED background, I’ve always seen the way we typically think of instruction in schools as backward. As a cornerstone, we should center our focus on the students who may struggle with language and literacy the most and plan forward from there, rather than treating them as an afterthought. We would then be able to improve outcomes for many more children who may not struggle as significantly, yet who also require more explicit support or more opportunities for practice. Instead, we design schools to center students who already have academic language and literacy skills in place, and we widen inequitable outcomes.</p>

<p>So with that in mind, speech-language pathology is an undervalued domain that has much to offer in considering the language needs of our students and what we need to do to screen, diagnose, and intervene to address those needs. Rather than relegating speech-language pathologists to the people who do that esoteric intervention thing in the room over there 3x a week with some children, we should be elevating their expertise and knowledge and seeking to disseminate that knowledge to general education teachers, most especially in earlier grades, so that we can seek to prevent language issues from arising.</p>

<p>I feel fortunate to have discovered that many SLPs and researchers are active on social media and other venues beyond research papers. Though I hesitate to call any out by name, because I know I will be missing way too many in any listing I give, here are just a few to get you started on your own journey of learning about language:</p>
<ul><li><a href="https://twitter.com/tiffanyphogan">Tiffany Hogan</a>: check out her co-authored paper with <a href="https://twitter.com/SuzAdlof">Suzanne Adlof</a> on the intersections of dyslexia and DLD, and she has <a href="https://www.seehearspeakpodcast.com/">a podcast</a>! A great list of podcasts on DLD-related issues is <a href="https://twitter.com/tiffanyphogan/status/1317159553528696837">here</a>.</li>
<li><a href="https://www.valleyspeech.org/about">Elsa Cárdenas-Hagan</a> is a bilingual SLP who brings a structured literacy lens to supporting English learners with foundational skills in reading and writing, in ways that honor and leverage their home language. Check out <a href="https://www.amazon.com/Literacy-Foundations-English-Learners-Evidence-based/dp/1598579657/">her book</a> and her <a href="https://www.valleyspeech.org/">website</a>. Her paper on <a href="https://eric.ed.gov/?id=EJ1188155">Cross-Language Connections for ELs</a> is a solid resource I keep coming back to.</li>
<li><a href="https://twitter.com/TrinaDSpencer2">Trina Spencer</a>: one of the co-authors of the CUBED assessments, which is now one of my go-to recommendations for a screener/diagnostic for foundational skills related to listening comprehension. If you’re wondering what SLPs might be able to offer in our teaching of narratives, check out her co-authored <a href="https://pubs.asha.org/doi/10.1044/2020_LSHSS-20-00015">paper on narrative interventions</a>. Also check out <a href="http://www.trinastoolbox.com/">her website</a> with a ton of resources for instruction and intervention.</li>
<li>Elizabeth D. Peña and <a href="https://twitter.com/HABLAlab">Habla Lab</a>: understanding the intersections of bilingualism and multilingualism with DLD is a critical area of need. Check out <a href="https://2languages2worlds.wordpress.com/">the blog</a> (NOTE: it may not be updated anymore). I learned a lot about the concept of “dynamic assessment” from them.</li>
<li><a href="https://education.uci.edu/washington_bio.html">Julie Washington</a>: leading the charge to bring explicit attention to African American English and how the use of the vernacular relates to literacy development and instructional opportunity. Check out the article on her in <a href="https://www.theatlantic.com/magazine/archive/2018/04/the-code-switcher/554099/">The Atlantic</a> and her co-authored paper with Mark Seidenberg on teaching reading to African American children in <a href="https://www.aft.org/ae/summer2021/washington_seidenberg">American Educator</a>.</li>
<li>Cate Crowley: she leads the LEADERSproject at Columbia — <a href="https://www.leadersproject.org/">lots of resources</a> are on hand regarding evaluation and intervention for culturally and linguistically diverse children. I am a big fan of the <a href="https://www.leadersproject.org/school-age-language-assessment-measures-slam/">SLAM cards</a> she has made freely available for language sampling and have been testing these out with some of my own sampling methods — but you can go right ahead and leverage the already-made <a href="https://www.leadersproject.org/2023/01/03/slam-guidelines-for-analysis/">SLAM Guidelines for Analysis</a> for each SLAM card.</li>
<li><a href="https://www.uwo.ca/fhs/csd//about/faculty/archibald_l.html">Lisa Archibald</a>: Dr. Archibald goes deep into cognition and memory and their intersections with language. Whenever I&#39;ve put out some questions into the Twitterverse (before Musk trashed it), she has offered guidance and food for thought.</li></ul>

<p>There are so many more SLPs out there than I can list here, so please just view this as a place to get started if you&#39;re interested in these topics …</p>

<p>Dig in! Speech-language pathology has a lot to offer those of us who are just beginning on our journeys to understanding language and literacy.</p>

<p><a href="https://languageandliteracy.blog/tag:SLPs" class="hashtag"><span>#</span><span class="p-category">SLPs</span></a> <a href="https://languageandliteracy.blog/tag:speech" class="hashtag"><span>#</span><span class="p-category">speech</span></a> <a href="https://languageandliteracy.blog/tag:language" class="hashtag"><span>#</span><span class="p-category">language</span></a> <a href="https://languageandliteracy.blog/tag:literacy" class="hashtag"><span>#</span><span class="p-category">literacy</span></a> <a href="https://languageandliteracy.blog/tag:DLD" class="hashtag"><span>#</span><span class="p-category">DLD</span></a> <a href="https://languageandliteracy.blog/tag:bidialectalism" class="hashtag"><span>#</span><span class="p-category">bidialectalism</span></a> <a href="https://languageandliteracy.blog/tag:multilingualism" class="hashtag"><span>#</span><span class="p-category">multilingualism</span></a> <a href="https://languageandliteracy.blog/tag:learning" class="hashtag"><span>#</span><span class="p-category">learning</span></a> <a href="https://languageandliteracy.blog/tag:research" class="hashtag"><span>#</span><span class="p-category">research</span></a> <a href="https://languageandliteracy.blog/tag:SVR" class="hashtag"><span>#</span><span class="p-category">SVR</span></a></p>
]]></content:encoded>
      <guid>https://languageandliteracy.blog/the-riches-of-speech-language-pathology</guid>
      <pubDate>Wed, 21 Oct 2020 22:37:15 +0000</pubDate>
    </item>
    <item>
      <title>The Influence of Acoustics on Learning</title>
      <link>https://languageandliteracy.blog/the-influence-of-acoustics-on-learning?pk_campaign=rss-feed</link>
<description>&lt;![CDATA[  With a classroom having good acoustical characteristics, learning is easier, deeper, more sustained, and less fatiguing.&#xA;&#xA;  —The Acoustical Society of America, in its introduction to Acoustical Performance Criteria, Design Requirements, and Guidelines for Schools, Part 1: Permanent Schools&#xA;&#xA;When I first moved to NYC from California, I was taken aback by the unceasing din. In our first apartment, my wife and I were treated to an all-night alleyway party each weekend by our downstairs neighbors. In desperation, we bought a white noise machine, but this proved to be a mostly futile gesture.&#xA;&#xA;Our second apartment was perched above a popular nightspot, which considerately recycled its beer bottles outside our bedroom window at three o’clock every morning. We got an additional white noise machine and put up layers of cardboard against the windows. But outside of professional acoustical treatment, there’s no hiding the intense, high-decibel sound of twenty-five gallons of beer-sodden glass bottles slamming repeatedly into their brethren as they are dumped into a bin.&#xA;&#xA;So I well know how noise can impact a person’s well-being. But maybe I’m just overly sensitive. I see people every day move along unfazed as truck horns blast in their face. Humans are a highly adaptive species, after all, and we go on about our business with a cacophony of ambient noise hovering about us like smog.&#xA;&#xA;While many of us may consider noise to be a minor nuisance (except, perhaps, when we are trying to sleep), it can have a profound impact on our health. 
Studies have shown that constant exposure to noise, such as living near a highway or airport, can lead not only to loss of sleep, but also to hypertension, Type 2 diabetes, heart problems, and lower birth weight, just to name a few long-term consequences (Casey, James, &amp; Morello-Frosch, 2017).&#xA;&#xA;There’s a lot more we could say about the impact of sound on health, whether physical or psychological. But most pertinent to our focus are the findings on how harmful noise can be for learning, most especially in the places where learning should be held most sacred: our schools.&#xA;&#xA;Given that children are far more vulnerable to noise than adults (Klatte, Lachmann, &amp; Meis, 2010), ensuring that children have a learning environment unthreatened by noise should be a commonsense goal. Yet how frequently have you heard an education reformer discuss acoustical treatment as a school or district improvement initiative? […silence... crickets… beeping horns...]&#xA;&#xA;Never? Exactly. So let’s start talking about it. In this post, we’ll examine the impact of noise on students and on teachers, then consider what we can do about it.&#xA;&#xA;The Impact of Noise on Student Learning&#xA;Chronic Noise: What You Hear Is How You Learn&#xA;&#xA;The natural sensitivity that the human ear has to auditory vibration is not a primary design consideration for most schools. Surfaces are hard. Corners are sharp. Floors are tiled. Every squeak of a sneaker, shout of a hormonally charged teen, recurring bell at the end of a period, and slam of a heavy door can be amplified, reverberating throughout the building and, especially during fire drills, deep into the marrow of one’s bones.&#xA;&#xA;In 1975, a seminal study by Arline Bronzaft and Dennis McCarthy at a school in NYC illuminated the impact of noise on learning. 
At Public School 98 in uptown Manhattan, the elevated 1 train roared by a mere two hundred feet from the southeast side of the building every few minutes, disrupting learning in nearby classrooms.&#xA;&#xA;One can imagine a teacher hovering exasperated in mid-sentence for the sixth time that period, waiting for the rumbling clatter of the train to recede into the distance, her students’ flighty attention ebbing away just as rapidly.&#xA;&#xA;PS 98 in Inwood, Manhattan on an overcast spring day in 2018.&#xA;&#xA;Four years’ worth of data show that students located in classrooms nearest the train had far lower reading scores than those on the quieter side, lagging behind by three months to as much as a year.&#xA;&#xA;Imagine if your child were placed in a classroom where they might lose an entire year’s worth of reading ability simply due to where the classroom happened to be located in the building.&#xA;&#xA;To their credit, the NYC Board of Education and the MTA were spurred by these results into immediate action (a phrase not commonly associated with either entity). The school installed special tiling in the classrooms facing the train, and the Metropolitan Transportation Authority insulated the tracks adjacent to the school. In a follow-up study, students across all classrooms were found to have comparable reading levels as a result of these simple interventions (Bronzaft, 1981). This demonstrates that the delay in student reading ability was not due to the quality of the staff or curriculum, but solely to the location of the classrooms in the building.&#xA;&#xA;Later studies have further revealed the impact of noise and school acoustics. Chronic exposure to noise, such as residing near a highway or airport, impairs a child’s ability to learn. 
In 1973, researchers measured the reading and auditory processing abilities of children living on different floors of the Bridge Apartments, a quartet of 32-story high-rises astride one of the busiest highways in uptown Manhattan (Cohen, Glass, &amp; Singer, 1973). They found that children living on lower floors, in greater proximity to the unrelenting noise of the highway, had lower reading scores compared to those on higher floors. The longer a child lived on a lower floor, the greater the gap.&#xA;&#xA;This is a vivid visualization of how where a child lives can either expand or inhibit their opportunities to learn—down to the floor they may happen to live on within the very same building. Noise pollution, as with pollution of other sorts, is worse in neighborhoods segregated by race or class (Casey et al., 2017). Schools serving primarily poor, Black, or Latinx communities thus tend to have greater amounts of ambient noise, which most likely means their classrooms will also be noisier—unless those spaces are constructed with materials that absorb external noise, or are belatedly given acoustical treatment (we’ll look more at how to fight noise in a minute).&#xA;&#xA;In a series of studies of schools located near airports in New York, Munich, the Netherlands, Spain, and the UK, cognitive tasks requiring memory and language processing, such as learning a word list, were impaired by aircraft noise, as were reading comprehension scores and auditory tests of speech perception (Evans &amp; Maxwell, 1997; Hygge, Evans, &amp; Bullinger, 2000; Stansfeld et al., 2005).&#xA;&#xA;In sum, noise makes it harder for all children to hear, to read, and to remember.&#xA;&#xA;Now is probably a good time to highlight the strong connection between reading comprehension and auditory learning. While our visual system is clearly an important component of reading, comprehension of the written word is founded on the ability to discern the sounds that letters and words are composed of. 
For children who struggle with reading, especially those classified as dyslexic, the difficulty is closely related to trouble with auditory processing (Hornickel &amp; Kraus, 2013).&#xA;&#xA;All children should learn in environments in which speech can be clearly heard, but this is especially critical for young children, children with hearing impairments or learning challenges, and learners of a new language (Kristiansen et al., 2011).&#xA;&#xA;The Importance of the Intelligibility of Speech&#xA;&#xA;The connection between noise and learning makes a lot of sense when you consider that speech and language are central to most classroom instruction. The more difficult it is to discern individual words, the more cognitive energy a brain must exert to fill in the gaps, drawing from prior knowledge and context (Pichora-Fuller, 2007). This is similar to the challenge a struggling reader faces when they expend more cognitive energy decoding words than deciphering meaning. And while you may be able to hold a conversation in the middle of a dance club and understand what your inebriated friend is saying, don’t expect that children can so easily fill in the blanks. Even adults who are deaf—and are therefore experienced lip readers—fail to recognize a large majority of the words spoken to them without signing (Altieri, Pisoni, &amp; Townsend, 2011).&#xA;&#xA;A 2000 report, “Classroom Acoustics: A Resource for Creating Environments with Desirable Listening Conditions,” framed the difficulty children face in understanding classroom speech thus:&#xA;&#xA;  In many classrooms in the United States, the speech intelligibility rating is 75 percent or less. That means that, in speech intelligibility tests, listeners with normal hearing can understand only 75 percent of the words read from a list. Imagine reading a textbook with every fourth word missing, and being expected to understand the material and be tested on it. Sounds ridiculous? 
Well, that is exactly the situation facing students every day in schools all across the country. (Seep et al., 2000)&#xA;&#xA;Let me restate that claim in a different way to stress the point: in many classrooms, due to poor acoustics, children may not understand 25% or more of the words their teacher speaks. It’s hard to verify a statistic like that, but there have been some surveys of acoustics across many schools in the world, and what is clear is that the acoustical quality of schools and classrooms can vary quite dramatically (Mealings, 2016).&#xA;&#xA;Given that the majority of learning in most classrooms is based upon speech, you might conclude that acoustics would be one of the primary concerns of classroom design. You may also think that it would be one of the first things a school leader considers when evaluating the learning conditions of their classrooms. The reality is that acoustical design is rarely considered, due to cost and complexity, and noise continues to be dismissed as a minor nuisance. Meanwhile, children are sitting in classrooms where they miss a substantial portion of what their teacher says each day, through no fault or inattention of their own.&#xA;&#xA;Noise Begets Noise&#xA;&#xA;But it is not only chronic, deafening noise from cars, airplanes, and trains that can impair learning. The background noise within a classroom can also be harmful. In a 2006 study, 158 eight-year-olds were randomly assigned to classrooms with three different noise conditions and administered tests: normal sound levels during testing (no talking), a constant stream of children babbling (around 65 decibels), and classroom babble with intermittent external noise events, like sirens (Dockrell &amp; Shield, 2006). One of the findings was that classroom chatter can have a detrimental effect on student performance on both verbal and nonverbal tasks. 
Which should surprise exactly no one who has ever tried to concentrate while others around them were gabbing.&#xA;&#xA;In a survey of secondary schools in London, researchers measured the unoccupied levels of ambient noise and reverberation in classrooms and compared them against sound levels during lessons across multiple subjects and activities (Shield et al., 2015). They found that sound levels during instruction were related to the acoustical quality of the rooms themselves, and that disruptions to learning, such as students talking or shouting, were correlated with rooms of poorer acoustical quality. Thus, the poorer the acoustical quality of a room—as measured before anyone occupies it—the noisier the room is likely to be once kids are in there. And, as a result, the more likely it is that learning will be thrown off track.&#xA;&#xA;This gives us a general principle: when a space is of poor acoustical quality, it is more likely to become noisier once in use. In other words, noise begets noise. And noise tends to lead to less self-regulated behavior. In a bar or a club, maybe that’s a desirable thing. But not in a school.&#xA;&#xA;We can see this every day in school cafeterias. Cafeterias can be some of the worst acoustical offenders, becoming deafeningly loud, which is hardly surprising given that they are chock-full of hungry students with pent-up energy socializing in giant rectangular spaces rife with reflective surfaces. Rather than a respite after a long morning of thinking and learning, lunch breaks instead become a time of sensory overload due to poor acoustics. If it’s an option, students sensitive to noise seek to escape instead to the calm, restorative environment of the classroom of their favorite teacher.&#xA;&#xA;All teachers dread the class that must be taught after lunch. That’s the period when kids come in buzzing with the latest scuttlebutt. 
In my first years of teaching in a self-contained 5th grade classroom, I learned that transitioning my students into academic learning too swiftly after lunch would result in no academic learning. It was as if they needed a break from their lunch break. Looking back, I wonder how much of my students’ frayed nerves could be attributed to a lunch period spent in a noisy basement with terrible acoustics.&#xA;&#xA;Because noise doesn’t only hinder learning. It also causes fatigue. Earlier, we discussed how, when it is harder to hear, our brains must work harder to fill in the blanks. This is not only taxing but can also cause us to miss subtle cues and thus distort our perceptions in social situations (Anderson, 2001). When you consider the trouble that adolescents already have with self-image and complex social situations, now consider the fisheye effect of cafeteria noise. It’s a disaster waiting to happen.&#xA;&#xA;Another overlooked area of poor acoustics is school stairwells. Again, these tend to be filled with hard, reverberant surfaces that echo with the scuffle of sneakers and shouts. Some stairwells, like the one at a Bronx middle school in the picture below, carry sound across multiple floors. As groups of students traverse the stairs, the noise is magnified. Students grow louder as they enter the stairwell in an effort to be heard, demonstrating the power of the signal-to-noise ratio in real time.&#xA;&#xA;Signal-to-noise ratio is the level of the sound you want to hear (such as someone’s voice) relative to the background noise. In order for speech to be intelligible, a positive signal-to-noise ratio must be maintained, which is why we tend to raise our voices when there is more noise around us.&#xA;&#xA;This Bronx middle school stairwell becomes a sea of noise when students use it.&#xA;&#xA;The Impact of Noise on Teachers&#xA;&#xA;Unwittingly, teachers themselves may exacerbate the noise in their classrooms.
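Because sound levels are measured on a logarithmic decibel scale, the signal-to-noise ratio described above is simply the difference between the speech level and the background noise level, both in dB. A minimal sketch, assuming hypothetical levels (the +15 dB target is the figure commonly cited for young listeners, e.g. in classroom acoustics guidance, but verify against the standard you use):

```python
# Sketch: signal-to-noise ratio (SNR) as a difference of decibel levels.
# Decibels are logarithmic, so SNR_dB = speech level - noise level.
# The +15 dB target and the example levels below are illustrative assumptions.

def snr_db(speech_level_db, noise_level_db):
    """Return the signal-to-noise ratio in dB."""
    return speech_level_db - noise_level_db

TARGET_SNR_DB = 15  # minimum often recommended for child listeners

for speech, noise, setting in [
    (65, 40, "quiet classroom"),
    (65, 60, "noisy classroom"),
    (75, 70, "teacher raising voice over group work"),
]:
    snr = snr_db(speech, noise)
    verdict = "ok" if snr >= TARGET_SNR_DB else "too low"
    print(f"{setting}: SNR = {snr:+d} dB ({verdict})")
```

Note what the last scenario shows: raising your voice over louder noise does not restore the ratio, which is why treating the room tends to beat out-shouting it.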
It is natural to speak more loudly when there is noise around us. Given the prevalence of reverberating sounds in classrooms, compounded by frequent group work and the natural propensity of children to speak loudly and excitedly, teachers invariably end up talking at higher volumes in the battle to make themselves heard. Some teachers operate at a level approaching hoarse hollering as a matter of course. They may proudly term this their “teacher voice.”&#xA;&#xA;But speaking constantly at higher volumes has consequences: teachers are more susceptible to voice-related issues. Across two studies, one in Sweden and one in the U.S., teachers were “the commonest ‘at risk’ occupation” and “four times more commonly represented clinically than in the population at large” (Williams, 2003). Another study found that poor acoustics and related voice problems reduced teacher well-being and increased absences due to illness (Kristiansen et al., 2013). Teacher voice problems may also have an impact on the economy: one study estimated a cost of $2.5 billion per year to the U.S. economy due to effects such as absences, clinical visits, and medication (Verdolini &amp; Ramig, 2001), while another study of Colombian teachers estimated the cost as potentially up to 37% of a teacher’s monthly wage (Cantor Cutiva &amp; Burdorf, 2015). Even more importantly, a teacher’s vocal impairment can make it difficult for students to understand what the teacher is saying (Rogerson &amp; Dodd, 2005).&#xA;&#xA;Another way of saying all of this is that while it may be obvious that loud, constant, chronic noise hinders learning, the poor acoustical quality of schools and classrooms can also have a cumulatively detrimental impact, both for teachers and for students.&#xA;&#xA;This doesn’t mean that classrooms and schools should all be expected to be hushed rectories where you can hear a pin drop.
A certain degree of ambient sound may even support focus, at least for tasks involving creativity (Mehta, Zhu, &amp; Cheema, 2012). I find that I can sometimes be more focused and productive when writing, for example, when I am somewhere with an ambient buzz of social activity and conversation, like at a cafe.&#xA;&#xA;But it does mean that too much noise, whether chronic or acute, external or internal, will make learning significantly more difficult for the students who can least afford to fall behind.&#xA;&#xA;Add to all of this the unceasing calls over a loudspeaker that can interrupt instruction throughout a school day. Well-organized schools ensure such calls are only made when absolutely necessary. At some schools, however, unscheduled announcements cause needless additional noise and constantly impede classroom learning.&#xA;&#xA;Can we quantify the impact of interruptions from loudspeakers? This should be an area for further research. But having been interrupted in the middle of a lesson countless times myself, I can state pretty confidently that such interruptions devalue learning. It’s hard enough as it is to maintain the attention of children without additional distractions.&#xA;&#xA;School Design for Acoustics&#xA;&#xA;What is it about a classroom or school that determines the quality of its acoustics?&#xA;&#xA;One key factor is reverberation, which refers to the amount of time that it takes a sound within a room to fade. Too much reverberation blurs spoken language, making it harder to understand. Imagine, for example, giving a speech in an unfurnished apartment with hardwood floors. Rather than hearing your words directly, listeners would also hear them reflecting off multiple surfaces, muddying your delivery. But some reverberation time (RT) can also be desirable, especially in a larger space, as sound needs to carry to listeners seated furthest from the speaker (ANSI/ASA, 2010).
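As a rough, back-of-the-envelope illustration (not a substitute for actual measurement), RT can be estimated from a room's volume and its total sound absorption using Sabine's classic formula, RT60 ≈ 0.161 · V / A, where V is the room volume in cubic meters and A is the total absorption in metric sabins. The room dimensions and absorption coefficients below are hypothetical, chosen only to show the order of magnitude:

```python
# Sketch: estimating reverberation time (RT60) with Sabine's formula.
# RT60 ≈ 0.161 * V / A, where V = room volume (m^3) and
# A = total absorption (metric sabins) = sum(surface area * absorption coeff).
# All surface areas and coefficients below are illustrative assumptions.

def rt60_sabine(volume_m3, surfaces):
    """surfaces: list of (area_m2, absorption_coefficient) pairs."""
    total_absorption = sum(area * coeff for area, coeff in surfaces)
    return 0.161 * volume_m3 / total_absorption

# A hypothetical 8 m x 7 m x 3 m classroom with hard surfaces:
hard_room = [
    (56.0, 0.02),   # tiled floor
    (56.0, 0.05),   # plaster ceiling
    (90.0, 0.03),   # painted walls
    (30.0, 0.25),   # desks, people, soft furnishings (rough lump estimate)
]
# The same room after treatment (carpet + higher-NRC ceiling tile):
treated_room = [
    (56.0, 0.30),   # carpeted floor
    (56.0, 0.75),   # acoustic ceiling tile, NRC 0.75
    (90.0, 0.03),   # walls unchanged
    (30.0, 0.25),   # furnishings unchanged
]

volume = 8 * 7 * 3  # 168 m^3
print(f"Untreated: {rt60_sabine(volume, hard_room):.2f} s")
print(f"Treated:   {rt60_sabine(volume, treated_room):.2f} s")
```

Even with toy numbers, the estimate falls from roughly 1.9 seconds to about 0.4 seconds, which is why the ceiling and floor are such high-leverage surfaces: they are the two largest areas in the sum.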
Covering reflective surfaces reduces RT, as in a room with a carpet or with tapestries and pictures hanging on the walls.&#xA;&#xA;There are recommended guidelines for how much reverberation is acceptable in a classroom environment. RT is generally measured in unoccupied classrooms through the creation of a sharp, sudden sound, such as by clapping two boards together or popping a balloon (there are videos online demonstrating measurement of different RTs in classrooms before and after acoustical treatments: search for something like ‘classroom reverberation time’). For hearing-impaired children, less than 0.3 seconds of RT is recommended, while 0.4 to 0.6 seconds is recommended for general education classrooms (Mealings, 2016). In a 2001 survey by researchers from the Centers for Disease Control and Prevention, 13% of U.S. children were estimated to have hearing loss due to noise exposure (Chepesiuk, 2005). It therefore seems to me that we would want all our children, regardless of disability, to learn in classrooms with an RT closer to 0.3 seconds.&#xA;&#xA;In surveys of school spaces across different countries, actual reverberation times varied dramatically from 0.2 to 1.9 seconds. To put this in context, an RT of 0.2 to 0.5 seconds is akin to what you would get in a recording studio, while an RT of 1.0 to 1.9 seconds would be akin to the amplified echoes of a concert hall. Long reverberation times in large spaces are great for performances. But in a classroom, where both individual and group work is the norm, fluttering echoes make concentration and learning all the more difficult.&#xA;&#xA;Fighting Noise in Schools&#xA;&#xA;Acoustical Treatments&#xA;&#xA;For schools that are already built and suffer from poor acoustics (the majority of our schools), there are investments that can be made in acoustical treatment targeting floors, ceilings, or walls.
As you consider which kind of treatment you or your school may want to invest in, bear in mind that absorptive materials work best when spread throughout a room, not concentrated in any one area (Seep et al., 2000).&#xA;&#xA;Carpets and Floors&#xA;&#xA;Carpets are one of the most direct ways to control sound levels. They also provide an area for class gatherings. However, carpets collect dirt and dust and need to be well-maintained, requiring more intensive upkeep than a laminated floor. And as anyone who has worked in a school knows, carpets don’t get replaced often and are rarely, if ever, steam cleaned.&#xA;&#xA;If you can’t get a carpet, reduce noise from the daily clatter of moving chairs and desks. Some teachers cut open tennis balls and place them under chair and table legs, but this may inadvertently increase indoor air pollutants. Instead, get “floor savers” (little felt disks) that adhere to chair and table bottoms. For a classroom with 32 chairs, this would cost around $75 at the time of this writing (packs of 24 for $15).&#xA;&#xA;Ceilings&#xA;&#xA;A better target for absorbing reverberation in classrooms is the ceiling, as absorptive ceilings can be more effective at absorbing sound than carpets (Shield et al., 2010). Some schools may already have some form of suspended acoustic ceiling tile, but those tiles may not be high-performing enough to reduce reverberation times to adequate levels.&#xA;&#xA;Swapping existing ceiling tiles for higher-performing panels can go a long way towards reducing reverberation time. Seep et al. (2000) recommend tiles with noise reduction coefficient (NRC) values of 0.75 or higher, while another source recommends an NRC of 0.9 or higher (Betz, 2015).&#xA;&#xA;Not all spaces have suspended ceilings, however, and some ceilings are very high. Another method to absorb sound can be to hang acoustical treatments from the ceiling.
These can come in various forms and colors, such as cubes, tetrahedrons, or waveforms, and are referred to by nifty names like baffles, clouds, and canopies. These decorative treatments could be well-suited for school hallways or entryways.&#xA;&#xA;For a less expensive DIY approach, the American Speech-Language-Hearing Association recommends “suspending banners, flags, student work, and plants from the ceiling to contribute to the reduction of noise and reverberation” (ASHA, 2015). Your ability to do this will depend on the nature of your ceiling, of course.&#xA;&#xA;Walls&#xA;&#xA;Most classrooms have parallel walls, which means that sound reverberates between them. Even if ceilings are acoustically tiled and floors carpeted, walls can still reflect a lot of sound.&#xA;&#xA;The American Speech-Language-Hearing Association recommends “placing mobile bulletin boards and bookcases at angles to the walls to decrease reverberation” (ASHA, 2015). Another expert recommends hanging “portable corkboards” at an angle on the walls (Betz, 2015).&#xA;&#xA;Portable corkboards and mobile bulletin boards can be pricey for an individual teacher to purchase, but the basic idea here is that any furniture, like bookcases, with a large surface area can be angled to help diffuse sound and keep it from reverberating back and forth between the walls.&#xA;&#xA;It’s not clear how sound-absorbent cork is in general, but it seems to be a material that could work well as a DIY panel. Collect enough wine corks, and you can make your own corkboard! I don’t know how well these would work as sound absorbers (more research, please!), but they might be worth a shot if you can’t afford professional paneling. Other low-cost options may be furniture stuffing or denim.&#xA;&#xA;Classroom furnishings in general can help to dull sound, but for larger spaces like science labs, cafeterias, or auditoriums, strategic placement of acoustical wall panels may be necessary.
Acoustical panels are made of foam or fabric that can absorb and diffuse sound.&#xA;&#xA;Teacher and Student Actions&#xA;&#xA;The simplest and most direct approach is to seat students who have the most trouble hearing closest to the teacher (Klatte, Lachmann, &amp; Meis, 2010). Generally speaking, students who have an identified problem with hearing will have this recommendation mandated as part of an Individualized Education Program (IEP), but it’s a good rule of thumb to bring students who seem to struggle with retention or following directions closer. It may be an auditory or visual issue that can be supported with proximity.&#xA;&#xA;Think also about how much your instruction relies on auditory learning. Supplement and reinforce auditory learning with tasks and texts (edu jargon: plan for multiple modalities). Use visuals and gestures when speaking, and ensure that directions for tasks are provided on a chart visible from the back of the room or as a handout.&#xA;&#xA;Be aware of the signal-to-noise ratio in your room. When the background noise increases, resist the urge to speak louder. Teach your students to recognize sound levels that are most appropriate for different tasks (e.g. whispers during reading time, “4 inch voices” during partner/group talk, across-the-room projection during class discussions).&#xA;&#xA;As I was drafting this, I facilitated a professional learning session for teachers in a classroom on the historic DeWitt Clinton campus in the Bronx, and because all of this research was fresh in my mind, I was hyper-aware of the terrible acoustics of the room. The ceilings were high and did not have any acoustical tile.
All it took was one pair of teachers having a side conversation to raise the overall background noise of the classroom.&#xA;&#xA;When I found myself raising my voice, I spoke to my participants about the acoustical quality of the room to make us all aware of it, and I noticed that framing my request for one voice at a time in this way was also more effective. It depersonalized the refocusing needed during discussions or work time. Building a similar awareness with students in our classrooms can support collective ownership of sound levels in the learning environment, rather than making it incumbent on the teacher to constantly monitor and shush students.&#xA;&#xA;In fact, why not teach students directly about the impact of noise in the places they learn and live? The NYC Department of Environmental Protection has free resources to teach students about sound and noise available on its website: http://www.nyc.gov/html/dep/html/environmentaleducation/soundnoise.shtml. There is a guide on sound mapping, and this could be done as part of an interdisciplinary project using tools for sound measurement. Student experiments measuring the sound levels in their own school and community can be an enlightening exercise for both students and adults. For example, a group of 5th grade students in Alexandria, Virginia, discovered that “the decibel level in the cafeteria could reach an average of 101 decibels, equivalent to the noise in a subway station” (NIDCD, 2016).&#xA;&#xA;If we want our students to become civically engaged citizens, advocating for their own needs and the needs of others, then the acoustics in their own classrooms would be a wonderful place to start.&#xA;&#xA;Amplification&#xA;&#xA;Amplification systems could be of some benefit to both teachers and students, but if an amplified voice just reverberates around the classroom walls, it may simply create more noise.
Amplification systems also tend to amplify only the teacher, not students, and asking students to pass around a microphone to speak is not an ideal workaround for every group discussion (Seep et al., 2000). Designing classroom walls and surfaces to dampen noise may be the wiser investment.&#xA;&#xA;Policy&#xA;&#xA;The best way to solve problems with acoustics is to prevent them beforehand, not correct them after the fact (Seep et al., 2000). And there is evidence that regulation of new school construction can have a positive impact on the acoustical quality of classrooms. In England and Wales, legislation introduced in 2003 required new school buildings to meet specifications for noise, reverberation time, and acoustical treatments. A 2015 study that measured 185 spaces across a range of representative UK secondary schools found that the number of spaces that met the requirements doubled in those built after the regulations, compared to those built before (Shield et al., 2015).&#xA;&#xA;So regulations matter. Unfortunately, many schools are constructed with cost as the most important factor, and acoustics can be all too easily overlooked, despite their centrality to learning. Regulations that provide clear specifications for acoustical design would help to ensure that built spaces provide an environment where speech can be heard.&#xA;&#xA;The good news is that the U.S. has a set of rigorous acoustical standards that could guide school design. In 2002, the Acoustical Society of America and American National Standards Institute published the American National Standard Acoustical Performance Criteria, Design Requirements, and Guidelines for Schools, which were further revised and updated in 2010 (ANSI/ASA). For classrooms and other school spaces, they provide design specifications to address background noise, reverberation times, and more.&#xA;&#xA;The bad news is twofold. First, these standards are not compulsory.
Since the federal government does not collect and publish information on school construction, it is unclear how many schools built after 2002 comply, or how many schools built prior to 2002 come anywhere close to these standards.&#xA;&#xA;Second, construction authorities already must adhere to a biblical amount of code, and they are not necessarily receptive to adding more costly and complex requirements on top of it.&#xA;&#xA;At the time of this writing, a clearer and narrower technical standard associated with reverberation time was proposed for inclusion in the International Building Code, a well-respected code created by the International Code Council and adopted by many states and municipalities (NRMCA, 2017). For reference, this proposal would establish a scope for technical criteria in Section 808 of the ICC/A117.1 – 2017 Section 1207, “Enhanced Classroom Acoustics” (ICC, 2018). This update will apply to new school construction when and where the code is adopted in 2021. But this highlights a key caveat: a state or municipality still needs to adopt that updated building code in order for it to apply and become enforceable. So we’re still kicking the can down the road.&#xA;&#xA;What can citizens who care about this do? Instead of waiting for the updated International Building Code to be adopted, we can advocate for more rigorous acoustical guidelines to be applied now by our school construction authorities for all new school buildings.&#xA;&#xA;And we’re still only talking about new school building construction. Noise abatement and quality acoustical treatment in older buildings becomes even more costly and complex.&#xA;&#xA;Grant funding for schools is frequently targeted for items of more immediate concern, like technology or musical instruments. But why not seek capital funding for acoustical treatments for classrooms and cafeterias?
The impact could be not only significant but sustained over the life of the school building, benefiting countless students and teachers.&#xA;&#xA;It’s also important to remember from an advocacy perspective that acoustics are an accessibility issue. If a child is hearing-impaired, their Individualized Education Program (IEP) should address the acoustical environment; this is especially important the younger the child is. But a child’s IEP is only an avenue for advocacy on a case-by-case basis. Advocacy to a public office of disability and to our public representatives is a broader way to tackle the issue, in conjunction with advocacy for individual students within the school.&#xA;&#xA;In Sum&#xA;&#xA;Noise makes it harder for all children to hear, to read, and to remember.&#xA;Children in many classrooms miss 25% or more of what their teacher says each day due to poor acoustics.&#xA;When a space is of poor acoustical quality, it is more likely to become noisier once in use.&#xA;Poor acoustical quality not only impedes student learning, but also creates costly teacher voice problems.&#xA;Reverberation times in classrooms should ideally be 0.4 seconds or less.&#xA;&#xA;To combat noise in schools, we can:&#xA;&#xA;Stick “floor savers” (little felt disks) on the bottoms of chair and table legs&#xA;Install or replace acoustical ceiling tiles with noise reduction coefficient (NRC) values 0.75 or higher&#xA;Install acoustical panels in cafeterias and auditoriums&#xA;Hang stuff from the ceiling, such as:&#xA;Acoustical baffles, clouds, or canopies&#xA;Banners, flags, student work, or plants&#xA;Place furniture and other large surfaces at different angles to the walls and ceiling to help diffuse sound&#xA;Support students in understanding and taking ownership of sound levels in their living and learning environment&#xA;Advocate for new school construction to adhere to ANSI or updated International Building Code guidelines for classroom 
acoustics&#xA;Seek funding for older school buildings to obtain professional acoustical treatment&#xA;&#xA;Extra Credit: The Ecology of Acoustics&#xA;&#xA;School leaders constantly scan the acoustical environment of their building. They can tell when an escalating voice may mean a fight is about to break out, versus when a class performance is about to occur. The subtle yet critical distinction between the emotional valence of kids having fun and kids in crisis is something that becomes instinctual.&#xA;&#xA;What about the cumulative types of sounds that kids make over time? What is the positive-to-negative ratio of words used? Can we identify the frequency and patterns of positive or problematic speech?&#xA;&#xA;Soundscape ecologist Bernie Krause has recorded the sounds of natural ecosystems for nearly half a century. From his experience listening intently, over time, to the sounds associated with specific places, he has developed a hypothesis that the health of an ecosystem can be gauged by the layered diversity of its sounds (Keim, 2016). According to this “niche hypothesis,” in a complex, diverse, and well-balanced ecosystem, each animal’s call finds its place within the tapestry of sounds Krause calls biophony, in the manner that the leaves of a thriving tree stagger themselves three-dimensionally to best catch the light.&#xA;&#xA;Conversely, in a damaged ecosystem, animal sounds thin out, devolving into extremes of either noise or silence. Due to the increasing noise of human traffic and industrial activity, animals sensitive to noise may have difficulty finding a niche in which they can be heard, thus reducing their ability to procreate and thrive.&#xA;&#xA;In the ecosystem of a school, one hopes that every child’s voice will find its niche.
Yet all too often, there may be more noise than signal.&#xA;&#xA;Can we gauge the health of a school ecosystem through the tapestry of its sounds?&#xA;&#xA;Is there a threshold that could be identified in the trends and types of school sounds, where we could intervene before problems occur? Could issues with school climate be identified more swiftly?&#xA;&#xA;In many cities in the U.S., scanners are programmed to automatically detect gunshots, using a technology called ShotSpotter. Once gunfire has been detected, an alert is sent to police dispatchers with a GPS location so it can be investigated (Smith, 2016). Similarly, acoustic sensors that can alert rangers are now being embedded in natural areas in danger of illegal logging or poaching (Hausheer, 2017).&#xA;&#xA;Maybe one day automated scanners will monitor the sounds within school hallways, stairwells, and cafeterias, supplementing the instinctual sense of school leaders with acoustical data over time.&#xA;&#xA;#ecosystems #ecology #acoustics #environment #hearing #speech #language #literacy #learning #buildings #architecture #sound #noise #research #reform&#xA;&#xA;References&#xA;&#xA;Altieri, N. A., Pisoni, D. B. and Townsend, J. T. (2011) ‘Some normative data on lip-reading skills (L)’, The Journal of the Acoustical Society of America, 130(1), pp. 1–4. doi: 10.1121/1.3593376.&#xA;American Speech-Language-Hearing Association (n.d.). Classroom Acoustics (Practice Portal). Available at: www.asha.org/Practice-Portal/Professional-Issues/Classroom-Acoustics (Accessed: 30 May 2018).&#xA;Anderson, K. (2001) ‘Noisy classrooms: What does the research really say?,’ Journal of Educational Audiology, 9, pp. 21–33.&#xA;ANSI/ASA S12.60-2010/Part 1 (2010) American National Standard Acoustical Performance Criteria, Design Requirements, and Guidelines for Schools, Part 1: Permanent Schools. Acoustical Society of America, 35 Pinelawn Road, Suite 114E, Melville, NY 11747, USA.&#xA;Betz, K. 
(2015) ‘Struggling To Hear, Learn, And Teach’, Commercial Architecture Magazine, 1 March. Available at: https://www.commercialarchitecturemagazine.com/struggling-to-hear-learn-and-teach/ (Accessed: 28 May 2018).&#xA;Bronzaft, A. L. (1981) ‘The effect of a noise abatement program on reading ability’, Journal of Environmental Psychology, 1(3), pp. 215–222. doi: 10.1016/S0272-4944(81)80040-0.&#xA;Bronzaft, A. L. and McCarthy, D. P. (1975) ‘The Effect of Elevated Train Noise On Reading Ability’, Environment and Behavior, 7(4), pp. 517–528. doi: 10.1177/001391657500700406.&#xA;Cantor Cutiva, L. C. and Burdorf, A. (2015) ‘Medical Costs and Productivity Costs Related to Voice Symptoms in Colombian Teachers’, Journal of Voice: Official Journal of the Voice Foundation, 29(6), pp. 776.e15–22. doi: 10.1016/j.jvoice.2015.01.005.&#xA;Casey, J. A. et al. (2017) ‘Race/Ethnicity, Socioeconomic Status, Residential Segregation, and Spatial Variation in Noise Exposure in the Contiguous United States’, Environmental Health Perspectives, 125(7), p. 077017. doi: 10.1289/EHP898.&#xA;Casey, J. A., James, P. and Morello-Frosch, R. (2017) Urban noise pollution is worst in poor and minority neighborhoods and segregated cities, The Conversation. Available at: http://theconversation.com/urban-noise-pollution-is-worst-in-poor-and-minority-neighborhoods-and-segregated-cities-81888 (Accessed: 19 May 2018).&#xA;Chepesiuk, R. (2005) ‘Decibel Hell: The Effects of Living in a Noisy World’, Environmental Health Perspectives, 113(1), pp. A34–A41.&#xA;Cohen, S., Glass, D. C. and Singer, J. E. (1973) ‘Apartment noise, auditory discrimination, and reading ability in children’, Journal of Experimental Social Psychology, 9(5), pp. 407–422. doi: 10.1016/S0022-1031(73)80005-8.&#xA;Dockrell, J. E. and Shield, B. M. (2006) ‘Acoustical barriers in classrooms: the impact of noise on performance in the classroom’, British Educational Research Journal, 32(3), pp. 509–525. 
doi: 10.1080/01411920600635494.&#xA;Evans, G. W. and Maxwell, L. (1997) ‘Chronic Noise Exposure and Reading Deficits: The Mediating Effects of Language Acquisition’, Environment and Behavior, 29(5), pp. 638–656. doi: 10.1177/0013916597295003.&#xA;Hausheer, J. E. (2017) Forest Soundscapes Hold the Key for Biodiversity Monitoring, Cool Green Science. Available at: https://blog.nature.org/science/2017/07/24/forest-soundscapes-hold-the-key-for-biodiversity-monitoring/ (Accessed: 14 June 2018).&#xA;Hornickel, J. and Kraus, N. (2013) ‘Unstable Representation of Sound: A Biological Marker of Dyslexia’, Journal of Neuroscience, 33(8), pp. 3500–3504. doi: 10.1523/JNEUROSCI.4205-12.2013.&#xA;Hygge, S., Evans, G., and Bullinger, M. (2000) ‘The Munich airport noise study - effects of chronic aircraft noise on children’s perception and cognition,’ Inter.noise 2000, The 29th International Congress and Exhibition on Noise Control Engineering, 27-30 August. Nice, France&#xA;International Code Council (2018). 2017 ICC A117.1-2017 Accessible and Usable Buildings and Facilities. Available at: https://codes.iccsafe.org/public/document/ICCA117_12017 (Accessed: 10 June 2018)&#xA;Keim, B. (2016) Decoding Nature’s Soundtrack, Nautilus. Available at: http://nautil.us/issue/38/noise/decoding-natures-soundtrack-rp (Accessed: 13 Jan 2018).&#xA;Klatte, M., Lachmann, T. and Meis, M. (2010) ‘Effects of noise and reverberation on speech perception and listening comprehension of children and adults in a classroom-like setting’, Noise &amp; Health, 12(49), pp. 270–282. doi: 10.4103/1463-1741.70506.&#xA;Kristiansen, J. et al. (2013) ‘Effects of Classroom Acoustics and Self-Reported Noise Exposure on Teachers’ Well-Being’, Environment and Behavior, 45(2), pp. 283–300. doi: 10.1177/0013916511429700&#xA;Mealings, K. 
(2016) ‘Classroom acoustic conditions: Understanding what is suitable through a review of national and international standards, recommendations, and live classroom measurements’,  Proceedings of ACOUSTICS 2016&#xA;Mehta, R., Zhu, R. (Juliet) and Cheema, A. (2012) ‘Is Noise Always Bad? Exploring the Effects of Ambient Noise on Creative Cognition’, Journal of Consumer Research, 39(4), pp. 784–799. doi: 10.1086/665048.&#xA;National Institute on Deafness and Other Communication Disorders (2016) ‘Students ask “How loud is too loud?” in the cafeteria,’ 22 July [Online]. Available at: https://www.noisyplanet.nidcd.nih.gov/have-you-heard/how-loud-is-too-loud-in-the-school-cafeteria (Accessed: 15 June 2018)&#xA;National Ready Mixed Concrete Association (n.d.)  Building Code Adoption by State. Available at: https://www.nrmca.org/Codes/downloads/Master-I-Code-Adoption-Chart-Feb-2017.pdf (Accessed: 10 June 2018) &#xA;Pichora-Fuller, M. (2007) Audition and cognition: What audiologists need to know about listening. Paper presented at the Adult Conference.&#xA;Rogerson, J. and Dodd, B. (2005) ‘Is there an effect of dysphonic teachers’ voices on children’s processing of spoken language?’, Journal of Voice: Official Journal of the Voice Foundation, 19(1), pp. 47–60. doi: 10.1016/j.jvoice.2004.02.007.&#xA;Seep, B. et al. (2000) ‘Classroom Acoustics: A Resource for Creating Environments with Desirable Listening Conditions’. Acoustical Society of America Publications.&#xA;Shield, B. et al. (2015) ‘A survey of acoustic conditions and noise levels in secondary school classrooms in England’, The Journal of the Acoustical Society of America, 137(1), pp. 177–188. doi: 10.1121/1.4904528.&#xA;Smith, R. (2016) ‘Here’s how the NYPD’s expanding ShotSpotter system works,’ DNAInfo, 18 May [Online]. Available at: http://www.dnainfo.com/new-york/20160518/crown-heights/heres-how-nypds-expanding-shotspotter-system-hears-gunfire/ (Accessed: 14 June 2018).&#xA;Stansfeld, S. A. et al. 
(2005) ‘Aircraft and road traffic noise and children’s cognition and health: a cross-national study’, Lancet (London, England), 365(9475), pp. 1942–1949. doi: 10.1016/S0140-6736(05)66660-3.&#xA;United States Access Board (n.d.) About the Classroom Acoustics Rulemaking. Available at: https://www.access-board.gov/guidelines-and-standards/buildings-and-sites/classroom-acoustics (Accessed: 30 May 2018).&#xA;Verdolini, K. and Ramig, L. O. (2001) ‘Review: occupational risks for voice problems’, Logopedics, Phoniatrics, Vocology, 26(1), pp. 37–46.]]&gt;</description>
      <content:encoded><![CDATA[<blockquote><p><em>With a classroom having good acoustical characteristics, learning is easier, deeper, more sustained, and less fatiguing.</em></p>

<p>—<em>The Acoustical Society of America, in its introduction to Acoustical Performance Criteria, Design Requirements, and Guidelines for Schools, Part 1: Permanent Schools</em></p></blockquote>

<p>When I first moved to NYC from California, I was taken aback by the unceasing din. In our first apartment, my wife and I were treated to an all-night alleyway party each weekend by our downstairs neighbors. In desperation, we bought a white noise machine, but this proved to be a mostly futile gesture.</p>

<p>Our second apartment was perched above a popular nightspot, which considerately recycled its beer bottles outside our bedroom window at three o’clock every morning. We got an additional white noise machine and put up layers of cardboard against the windows. But short of professional acoustical treatment, there’s no hiding the intense, high-decibel sound of twenty-five gallons of beer-sodden glass bottles slamming repeatedly into their brethren as they are dumped into a bin.</p>



<p>So I know well how noise can impact a person’s well-being. But maybe I’m just overly sensitive. I see people every day move along unfazed as truck horns blast in their faces. Humans are a highly adaptive species, after all, and we go about our business with a cacophony of ambient noise hovering about us like smog.</p>

<p>While many of us may consider noise a minor nuisance (except, perhaps, when we are trying to sleep), it can have a profound impact on our health. Studies have shown that constant exposure to noise, such as living near a highway or airport, can lead not only to loss of sleep, but to hypertension, Type 2 diabetes, heart problems, and lower birth weight, to name just a few long-term consequences (Casey, James, &amp; Morello-Frosch, 2017).</p>

<p>There’s a lot more we could say about the impact of sound on health, whether physical or psychological. But most pertinent to our focus are the findings on how harmful noise can be for learning, most especially in the places where learning should be what is held most sacred: our schools.</p>

<p>Given that children are far more vulnerable to noise than adults (Klatte, Lachmann, &amp; Meis, 2010), ensuring that children have a learning environment unthreatened by noise should be a commonsense goal. Yet how frequently have you heard an education reformer discuss acoustical treatment as a school or district improvement initiative? [<em>…silence... crickets… beeping horns...</em>]</p>

<p>Never? Exactly. So let’s start talking about it. In this post, we’ll examine the impact of noise on students and on teachers, then consider what we can do about it.</p>

<h1 id="the-impact-of-noise-on-student-learning">The Impact of Noise on Student Learning</h1>

<h2 id="chronic-noise-what-you-hear-is-how-you-learn">Chronic Noise: What You Hear Is How You Learn</h2>

<p>The natural sensitivity of the human ear to auditory vibration is not a primary design consideration for most schools. Surfaces are hard. Corners are sharp. Floors are tiled. Every squeak of a sneaker, every shout of a hormonally charged teen, every recurring bell at the end of a period, every slam of a heavy door is amplified, reverberating throughout the building and, especially during fire drills, deep into the marrow of one’s bones.</p>

<p>In 1975, a seminal study by Arline Bronzaft and Dennis McCarthy at a school in NYC illuminated the impact of noise on learning. At Public School 98 in uptown Manhattan, the elevated 1 train roared by a mere two hundred feet from the southeast side of the building every few minutes, disrupting learning in nearby classrooms.</p>

<p>One can imagine a teacher pausing, exasperated, in mid-sentence for the sixth time that period, waiting for the rumbling clatter of the train to recede into the distance, her students’ flighty attention ebbing away just as rapidly.</p>

<p><img src="https://i.snap.as/4G7DFgZQ.jpg" alt="PS 98 in Inwood, Manhattan on an overcast spring day in 2018."/></p>

<p>Four years’ worth of data showed that students in the classrooms nearest the train had far lower reading scores than those on the quieter side, lagging behind by three months to as much as a year.</p>

<p>Imagine if your child were placed in a classroom where they might lose an entire year’s worth of reading ability simply due to where the classroom happened to be located in the building.</p>

<p>To the credit of the NYC Board of Education and the Metropolitan Transportation Authority, these results were significant enough to spur both into immediate action (a phrase not commonly associated with either entity). The school installed special tiling in the classrooms facing the train, and the MTA insulated the tracks adjacent to the school. In a follow-up study, students across all classrooms were found to have comparable reading levels as a result of these simple interventions (Bronzaft, 1981). This demonstrates that the delay in student reading ability was due not to the quality of the staff or curriculum, but solely to the location of the classrooms in the building.</p>

<p>Later studies have further revealed the impact of noise and school acoustics. Chronic exposure to noise, such as residing near a highway or airport, impairs a child’s ability to learn. In 1973, researchers measured the reading and auditory processing abilities of children living on different floors of the Bridge Apartments, a quartet of 32-story high-rises astride one of the busiest highways in uptown Manhattan (Cohen, Glass, &amp; Singer). They found that children living on lower floors, in greater proximity to the unrelenting noise of the highway, had lower reading scores compared to those on higher floors. The longer a child lived on a lower floor, the greater the gap.</p>

<p>This is a vivid illustration of how where a child lives can either expand or inhibit their opportunities to learn—down to the floor they happen to live on within the very same building. Noise pollution, as with pollution of other sorts, is worse in neighborhoods segregated by race or class (Casey et al., 2017). Schools serving primarily poor, Black, or Latinx communities thus tend to have greater amounts of ambient noise, which most likely means their classrooms will also be noisier—unless those spaces are constructed with materials that absorb external noise, or are belatedly given acoustical treatment (we’ll look more at how to fight noise in a minute).</p>

<p>In a series of studies of schools located near airports in New York, Munich, the Netherlands, Spain, and the UK, cognitive tasks requiring memory and language processing, such as learning a word list, were impaired by aircraft noise, as were reading comprehension scores and auditory tests of speech perception (Evans &amp; Maxwell, 1997; Hygge, Evans, &amp; Bullinger, 2000; Stansfeld et al., 2005).</p>

<p>In sum, noise makes it harder for all children to hear, to read, and to remember.</p>

<p>Now is probably a good time to highlight the strong connection between reading comprehension and auditory learning. While our visual system is clearly an important component of reading, comprehension of the written word is founded on the ability to discern the sounds that letters and words are composed of. For children who struggle with reading, especially those classified as dyslexic, the difficulty is closely related to trouble with auditory processing (Hornickel &amp; Kraus, 2013).</p>

<p>All children should learn in environments in which speech can be clearly heard, but this is especially critical for young children, for children with hearing impairments or learning challenges, and for learners of a new language (Kristiansen et al., 2011).</p>

<h2 id="the-importance-of-the-intelligibility-of-speech">The Importance of the Intelligibility of Speech</h2>

<p>The connection between noise and learning makes a lot of sense when you consider that speech and language are central to most classroom instruction. The more difficult it is to discern individual words, the more cognitive energy a brain must exert to fill in the gaps, drawing from prior knowledge and context (Pichora-Fuller, 2007). This is similar to the challenge a struggling reader faces when they expend more cognitive energy decoding words rather than deciphering meaning. And while you may be able to hold a conversation in the middle of a dance club and understand what your inebriated friend is saying, don’t expect that children can so easily fill in the blanks. Even those adults who are deaf—and are therefore experienced lip readers—do not recognize a large majority of words spoken to them without signing (Altieri, Pisoni, &amp; Townsend, 2011).</p>

<p>A 2000 report, “Classroom Acoustics: A Resource for Creating Environments with Desirable Listening Conditions,” framed the difficulty children face in understanding classroom speech thus:</p>

<blockquote><p><em>In many classrooms in the United States, the speech intelligibility rating is 75 percent or less. That means that, in speech intelligibility tests, listeners with normal hearing can understand only 75 percent of the words read from a list. Imagine reading a textbook with every fourth word missing, and being expected to understand the material and be tested on it. Sounds ridiculous? Well, that is exactly the situation facing students every day in schools all across the country. (Seep et al., 2000)</em></p></blockquote>

<p>Let me restate that claim to stress the point: in many classrooms, due to poor acoustics, children may not understand 25% or more of the words their teacher speaks. A statistic like that is hard to verify, but there have been surveys of acoustics across schools around the world, and what is clear is that the acoustical quality of schools and classrooms varies dramatically (Mealings, 2016).</p>
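<p>To make the textbook analogy above concrete, here is a toy Python sketch (my own illustration, not anything from the studies cited) that blanks out every fourth word, simulating a 75 percent intelligibility rating:</p>

```python
def drop_every_nth_word(text, n=4):
    """Blank out every nth word, simulating a speech intelligibility
    rating of (n-1)/n -- e.g., n=4 leaves 75 percent of words intact."""
    words = text.split()
    return " ".join("____" if i % n == n - 1 else word
                    for i, word in enumerate(words))

print(drop_every_nth_word(
    "Imagine reading a textbook with every fourth word missing"))
# → Imagine reading a ____ with every fourth ____ missing
```

<p>Run it on any sentence from this post and the comprehension burden becomes obvious immediately.</p>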

<p>Given that the majority of learning in most classrooms is based upon speech, you might conclude that acoustics would be one of the primary concerns of classroom design. You may also think that it would be one of the first things a school leader considers when evaluating the learning conditions of their classrooms. The reality is that acoustical design is rarely considered due to cost and complexity, and noise continues to be dismissed as a minor nuisance. Meanwhile, children are sitting in classrooms where they miss a substantial portion of what their teacher says each day, due to no fault nor inattention of their own.</p>

<h2 id="noise-begets-noise">Noise Begets Noise</h2>

<p>But it is not only chronic, deafening noise from cars, airplanes, and trains that can impair learning. The background noise within a classroom can also be harmful. In a 2006 study, 158 eight-year-olds were randomly assigned to be tested under one of three noise conditions: normal sound levels (no talking), a constant stream of children babbling (around 65 decibels), and classroom babble with intermittent external noise events, like sirens (Dockrell &amp; Shield). One of the findings was that classroom chatter can have a detrimental effect on student performance on both verbal and nonverbal tasks. Which should surprise exactly no one who has ever tried to concentrate while others around them were gabbing.</p>

<p>In a survey of secondary schools in London, researchers measured the unoccupied levels of ambient noise and reverberation in classrooms and compared them against sound levels during lessons across multiple subjects and activities (Shield et al., 2015). They found that sound levels during instruction were related to the acoustical quality of the rooms themselves, and that disruptions to learning, such as students talking or shouting, were correlated with poorer acoustical quality. Thus, the poorer the acoustical quality of a room—as measured before anyone occupies it—the noisier the room is likely to be once kids are in there. And the more likely, as a result, that learning will be thrown off track.</p>

<p>This gives us a general principle: when a space is of poor acoustical quality, it is more likely to become noisier once in use. In other words, noise begets noise. And noise tends to lead to less self-regulated behavior. In a bar or a club, maybe that’s a desirable thing. But not in a school.</p>

<p>We can see this every day in school cafeterias. Cafeterias can be some of the worst acoustical offenders, becoming deafeningly loud, which is hardly surprising given they are chock full of hungry students with pent-up energy socializing in giant rectangular spaces rife with reflective surfaces. Rather than a respite after a long morning of thinking and learning, lunch breaks instead become a time of sensory overload due to poor acoustics. If it’s an option, students sensitive to noise seek escape to the calm, restorative environment of the classroom of their favorite teacher instead.</p>

<p>All teachers dread the class that must be taught after lunch. That’s the period when kids come in buzzing with the latest scuttlebutt. In my first years of teaching in a self-contained 5th grade classroom, I learned that transitioning my students into academic learning too swiftly after lunch would result in no academic learning. It was as if they needed a break from their lunch break. Looking back, I wonder how much of my students’ frayed nerves could be attributed to a lunch period spent in a noisy basement with terrible acoustics.</p>

<p>Noise doesn’t only hinder learning; it also causes fatigue. Earlier, we discussed how, when it is harder to hear, our brains must work harder to fill in the blanks. This is not only taxing; it can also cause us to miss subtle cues and thus distort our perceptions in social situations (Anderson, 2001). Consider the trouble adolescents already have with self-image and complex social situations, then add the fisheye effect of cafeteria noise. It’s a disaster waiting to happen.</p>

<p>Another overlooked area of poor acoustics is the school stairwell. Again, stairwells tend to be filled with hard, reverberant surfaces that echo with the scuffle of sneakers and shouts. Some, like the one at a Bronx middle school in the picture below, carry sound across multiple floors. As groups of students traverse the stairs, the noise magnifies: students grow louder as they enter the stairwell in an effort to be heard, demonstrating the power of the signal-to-noise ratio in real time.</p>

<p><em>Signal-to-noise ratio</em> is the audibility of what you want to hear (such as someone’s voice) against the background noise. For speech to be intelligible, a positive signal-to-noise ratio must be maintained, which is why we tend to raise our voices when there is more noise around us.</p>
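<p>In decibel terms, the ratio is just a subtraction of levels, which makes the “noise begets noise” spiral easy to sketch. A minimal Python illustration (the +15 dB target is a commonly cited guideline for young listeners, an assumption on my part rather than a figure from this post’s sources):</p>

```python
def snr_db(speech_db, noise_db):
    """Signal-to-noise ratio: how far the speech level (dB) sits
    above the background noise level (dB) at the listener's ear."""
    return speech_db - noise_db

def required_voice_level(noise_db, target_snr_db=15):
    """Voice level (dB) needed to hold a given SNR over background noise."""
    return noise_db + target_snr_db

# As the room gets louder, the voice needed to stay intelligible
# climbs in lockstep -- hence the escalation to "teacher voice."
for noise in (35, 45, 55, 65):
    print(f"{noise} dB background -> speak at {required_voice_level(noise)} dB")
```

<p>The arithmetic is trivial, but it shows why quieting the room, rather than amplifying the speaker, is the sustainable fix.</p>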

<p><img src="https://i.snap.as/ApWM7H19.jpg" alt="This Bronx middle school stairwell becomes a sea of noise when students use it."/></p>

<h2 id="the-impact-of-noise-on-teachers">The Impact of Noise on Teachers</h2>

<p>Unwittingly, teachers themselves may exacerbate the noise in their classrooms. It is natural to speak more loudly when there is noise around us. Given the prevalence of reverberating sound in classrooms, compounded by frequent group work and the natural propensity of children to speak loudly and excitedly, teachers invariably end up talking at higher volumes in the battle to make themselves heard. Some teachers operate just shy of hoarse hollering as a matter of course. They may proudly term this their “teacher voice.”</p>

<p> But speaking constantly at higher volumes has consequences: teachers are more susceptible to voice-related issues. Across two studies, one in Sweden and one in the U.S., teachers were “the commonest ‘at risk’ occupation” and “four times more commonly represented clinically than in the population at large” (Williams, 2003). Another study found that poor acoustics and related voice problems reduced teacher well-being, as well as increased absences due to illness (Kristiansen et al., 2011). Teacher voice problems also may have an impact on the economy: one study estimated a cost of $2.5 billion per year to the U.S. economy due to effects such as absences, clinical visits, and medication (Verdolini &amp; Ramig, 2001), while another study of Colombian teachers estimated the cost as potentially up to 37% of a teacher’s monthly wage (Cantor Cutiva &amp; Burdorf, 2015). Even more importantly, a teacher’s vocal impairment can make it difficult for students to understand what a teacher is saying (Rogerson &amp; Dodd, 2005).</p>

<p>Another way of saying all of this: while it may be obvious that loud, chronic noise hinders learning, the poor acoustical quality of schools and classrooms can also have a cumulatively detrimental impact on both teachers and students.</p>

<p>This doesn’t mean that classrooms and schools should all be hushed rectories where you can hear a pin drop. A certain degree of ambient sound may even support focus, at least for tasks involving creativity (Mehta, Zhu, &amp; Cheema, 2012). I find, for example, that I can sometimes be more focused and productive when writing somewhere with an ambient buzz of social activity and conversation, like a cafe.</p>

<p>But it does mean that too much noise, whether chronic or acute, external or internal, will make learning significantly more difficult for the students who can least afford to fall behind.</p>

<p>Add to all of this the unceasing calls over a loudspeaker that can interrupt instruction throughout a school day. Well-organized schools ensure such calls are only made when absolutely necessary. At some schools, however, unscheduled announcements cause needless additional noise and constantly impede classroom learning.</p>

<p>Can we quantify the impact of interruptions from loudspeakers? This should be an area for further research. But having been interrupted in the middle of a lesson countless times myself, I can state pretty confidently such interruptions devalue learning. It’s hard enough as it is to maintain the attention of children without additional distractions.</p>

<h1 id="school-design-for-acoustics">School Design for Acoustics</h1>

<p>What is it about a classroom or school that determines the quality of its acoustics? </p>

<p>One key factor is reverberation, which refers to the amount of time it takes a sound within a room to fade. Too much reverberation intensifies and complicates spoken language, making it harder to understand. Imagine giving a speech in an unfurnished apartment with hardwood floors: rather than hearing your words directly, listeners would also hear them reflecting off multiple surfaces, muddying your delivery. But some reverberation time (RT) can be desirable, especially in a larger space, as sound needs to carry to listeners seated furthest from the speaker (ANSI/ASA, 2010). Covering reflective surfaces reduces RT, as in a room with a carpet, or with tapestries or pictures hanging on the walls.</p>

<p>There are recommended guidelines for how much reverberation is acceptable in a classroom. RT is generally measured in unoccupied classrooms by creating a sharp, sudden sound, such as clapping two boards together or popping a balloon (there are videos online demonstrating the measurement of different RTs in classrooms before and after acoustical treatment: search for something like ‘classroom reverberation time’). For hearing-impaired children, an RT of less than 0.3 seconds is recommended, while 0.4 to 0.6 seconds is recommended for general education classrooms (Mealings, 2016). In a 2001 survey by researchers from the Centers for Disease Control and Prevention, 13% of U.S. children were estimated to have hearing loss due to noise exposure (Chepesiuk, 2005). It therefore seems to me that we would want all our children, regardless of disability, to learn in classrooms with an RT closer to 0.3 seconds.</p>

<p>In surveys of school spaces across different countries, actual reverberation times varied dramatically, from 0.2 to 1.9 seconds. To put this in context, an RT of 0.2–0.5 seconds is akin to what you would get in a recording studio, while an RT of 1.0–1.9 seconds is akin to the amplified echoes of a concert hall. Long reverberation in a large space is great for performances. But in a classroom, where both individual and group work is the norm, fluttering echoes make concentration and learning all the more difficult.</p>
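<p>To see how room volume and surface materials produce these numbers, Sabine’s formula offers a standard first approximation: RT60 = 0.161 × V / A, where V is room volume in cubic meters and A is total absorption (each surface area times its absorption coefficient). The sketch below uses illustrative dimensions and coefficients of my own choosing, not measurements from any study cited here:</p>

```python
def sabine_rt60(volume_m3, surfaces):
    """Estimate reverberation time (seconds) via Sabine's formula:
    RT60 = 0.161 * V / A, with A the sum of (area * absorption
    coefficient) over all surfaces, in metric sabins."""
    total_absorption = sum(area * coeff for area, coeff in surfaces)
    return 0.161 * volume_m3 / total_absorption

# Illustrative bare classroom: 9 m x 7 m floor, 3 m ceilings.
# Coefficients are rough mid-frequency values for hard finishes.
hard_room = [
    (63.0, 0.03),  # painted concrete ceiling
    (63.0, 0.05),  # tiled floor
    (96.0, 0.04),  # painted walls: 2 * (9 + 7) * 3
]
volume = 9 * 7 * 3  # 189 cubic meters

print(round(sabine_rt60(volume, hard_room), 2))  # well above any target

# Swap the hard ceiling for high-absorption acoustical tile (~0.75).
treated_room = [(63.0, 0.75)] + hard_room[1:]
print(round(sabine_rt60(volume, treated_room), 2))  # near the 0.4-0.6 band
```

<p>The point of the toy numbers: a single high-absorption ceiling swap can pull a hard-surfaced room from several seconds of reverberation down toward the recommended band, which is why ceilings are such a common treatment target.</p>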

<h1 id="fighting-noise-in-schools">Fighting Noise in Schools</h1>

<h2 id="acoustical-treatments">Acoustical Treatments</h2>

<p>For schools that are already built and suffer from poor acoustics—the majority of our schools—there are investments that can be made in acoustical treatments targeting floors, ceilings, or walls. As you consider which kind of treatment you or your school may want to invest in, bear in mind that absorptive materials work best when spread throughout a room, not concentrated in any one area (Seep et al., 2000).</p>

<h2 id="carpets-and-floors">Carpets and Floors</h2>

<p>Carpets are one of the most direct methods of controlling sound levels. They also provide an area for class gatherings. However, carpets collect dirt and dust and need to be well maintained, requiring more intensive upkeep than a laminated floor. And as anyone who has worked in a school knows, carpets rarely get replaced, and even more rarely steam cleaned.</p>

<p>If you can’t get a carpet, reduce the daily clatter of moving chairs and desks. Some teachers cut open tennis balls and place them under chair and table legs, but this may inadvertently increase indoor air pollutants. Instead, get “floor savers” (little felt disks) that adhere to the bottoms of chair and table legs. For a classroom with 32 chairs, this would cost around $75 at the time of this writing (packs of 24 for $15).</p>

<h2 id="ceilings">Ceilings</h2>

<p>A better target for absorbing reverberation in classrooms is the ceiling, as absorptive ceiling tiles can be more effective than carpets (Shield et al., 2010). Some schools may already have some form of suspended acoustic ceiling tile, but those tiles may not perform well enough to reduce reverberation times to adequate levels.</p>

<p>Swapping existing ceiling tiles for higher-performing panels can go a long way toward reducing reverberation time. Seep et al. (2000) recommend tiles with noise reduction coefficient (NRC) values of 0.75 or higher, while another source recommends NRC 0.9 or higher (Betz, 2015).</p>

<p>Not all spaces have suspended ceilings, however, and some ceilings are very high. Another method to absorb sound can be to hang acoustical treatments from the ceiling. These can come in various forms and colors, such as cubes, tetrahedrons, or waveforms, and are referred to by nifty names like baffles, clouds, and canopies. These decorative treatments could be well-suited for school hallways or entryways.</p>

<p>For a less expensive DIY approach, the American Speech-Language-Hearing Association recommends “suspending banners, flags, student work, and plants from the ceiling to contribute to the reduction of noise and reverberation” (ASHA, 2015). Your ability to do this will depend on the nature of your ceiling, of course.</p>

<h2 id="walls">Walls</h2>

<p>Most classrooms have parallel walls, which means that sound reverberates between them. Even if ceilings are acoustically tiled and floors carpeted, walls can still reflect a lot of sound.</p>

<p>The American Speech-Language-Hearing Association recommends “placing mobile bulletin boards and bookcases at angles to the walls to decrease reverberation” (ASHA, 2015). Another expert also recommends hanging “portable corkboards” at an angle on the walls (Betz, 2015).</p>

<p>Portable corkboards and mobile bulletin boards can be pricey for an individual teacher to purchase, but the basic idea here is that any furniture, like bookcases, with a large surface area can be angled to help diffuse sound and keep it from reverberating back and forth between the walls.</p>

<p>It’s not clear how sound-absorbent cork is in general, but it seems like a material that could work well for a DIY panel. Collect enough wine corks, and you can make your own corkboard! I don’t know how well these would work as sound absorbers (more research, please!), but it might be worth a shot if you can’t afford professional paneling. Other low-cost options may be furniture stuffing or denim.</p>

<p>Classroom furnishings in general can help to dull sound, but for larger spaces like science labs, cafeterias, or auditoriums, strategic placement of acoustical wall panels may be necessary. Acoustical panels are made of a foam or fabric that can absorb and diffuse sound.</p>

<h2 id="teacher-and-student-actions">Teacher and Student Actions</h2>

<p>The simplest and most direct approach is to seat the students who have the most trouble hearing closest to the teacher (Klatte, Lachmann, &amp; Meis, 2010). Generally speaking, students with an identified hearing problem will have this recommendation mandated as part of an Individualized Education Program (IEP), but it’s a good rule of thumb to bring students who seem to struggle with retention or following directions closer. It may be an auditory or visual issue that can be supported with proximity.</p>

<p>Think also about how much your instruction relies on auditory learning. Supplement and reinforce auditory learning with tasks and texts (edu jargon: plan for multiple modalities). Use visuals and gestures when speaking, and ensure that directions for tasks are provided on a chart visible from the back of the room or as a handout.</p>

<p>Be aware of the signal-to-noise ratio in your room. When the background noise increases, resist the urge to speak louder. Teach your students to recognize the sound levels most appropriate for different tasks (e.g., whispers during reading time, “4 inch voices” during partner/group talk, across-the-room projection during class discussions).</p>

<p> As I was drafting this, I facilitated a professional learning session for teachers in a classroom on the historic DeWitt Clinton campus in the Bronx, and because all of this research was fresh on my mind, I was hyper-aware of the terrible acoustics of the room. The ceilings were high and did not have any acoustical tile. All it took was one pair of teachers having a side conversation to raise the overall background noise of the classroom.</p>

<p> When I found myself raising my voice, I spoke to my participants about the acoustical quality of the room to make us all aware of it, and I noticed that framing my request for one voice at a time in this way was also more effective. It depersonalized the refocusing needed during discussions or work time. Building a similar awareness with students in our classrooms can support collective ownership of sound levels in the learning environment, rather than making it incumbent on the teacher to constantly monitor and shush students.</p>

<p>In fact, why not teach students directly about the impact of noise on the places they learn and live? The NYC Department of Environmental Protection has free resources to teach students about sound and noise available on its website: <a href="http://www.nyc.gov/html/dep/html/environmental_education/sound_noise.shtml">http://www.nyc.gov/html/dep/html/environmental_education/sound_noise.shtml</a>. There is a guide on sound mapping, which could be done as part of an interdisciplinary project using tools for sound measurement. Student experiments measuring sound levels in their own school and community can be enlightening for students and adults alike. For example, a group of 5th grade students in Alexandria, Virginia, discovered that “the decibel level in the cafeteria could reach an average of 101 decibels, equivalent to the noise in a subway station” (NIDCD, 2016).</p>

<p>If we want our students to become civically engaged citizens, advocating for their own needs and the needs of others, then the acoustics in their own classrooms would be a wonderful place to start.</p>

<h2 id="amplification">Amplification</h2>

<p>Amplification systems could be of some benefit to both teachers and students, but if an amplified voice just reverberates around the classroom walls, it may lend itself to the creation of more noise. Amplification systems also tend to amplify only the teacher, not students, and asking students to pass around a microphone to speak is not an ideal workaround for every group discussion (Seep et al., 2000). Designing classroom walls and surfaces to dampen noise may be the wiser investment.</p>

<h2 id="policy">Policy</h2>

<p>The best way to solve problems with acoustics is to prevent them beforehand, not correct them after the fact (Seep et al., 2000). And there is evidence that regulation of new school construction can have a positive impact on the acoustical quality of classrooms. In England and Wales, legislation introduced in 2003 required new school buildings to meet specifications for noise, reverberation time, and acoustical treatments. A 2015 survey of a representative range of UK secondary schools measured 185 spaces and found that the proportion meeting the requirements doubled in buildings constructed after the regulations, compared to those built before (Shield et al., 2015).</p>

<p>So regulations matter. Unfortunately, many schools are constructed with cost as the most important factor, and acoustics can be all too easily overlooked, despite their centrality to learning. Regulations that provide clear specifications for acoustical design would help to ensure that built spaces provide an environment where speech can be heard.</p>

<p> The good news is that the U.S. has a set of rigorous acoustical standards that could guide school design. In 2002, the Acoustical Society of America and American National Standards Institute published the American National Standard Acoustical Performance Criteria, Design Requirements, and Guidelines for Schools, which were further revised and updated in 2010 (ANSI/ASA). For classrooms and other school spaces, they provide design specifications to address background noise, reverberation times, and more.</p>

<p> The bad news is twofold. First, these standards are not compulsory. Since the federal government does not collect and publish information on school construction, it is unclear how many schools built after 2002 comply, and furthermore, how many schools built prior to 2002 may be anywhere close to these standards.</p>

<p>Second, construction authorities already must adhere to a biblical amount of code, and they are not necessarily receptive to costly and complex additions on top of it.</p>

<p>At the time of this writing, a clearer and narrower technical standard for reverberation time had been proposed for inclusion in the International Building Code, a well-respected code created by the International Code Council and adopted by many states and municipalities (NRMCA, 2017). For reference, this proposal would establish a scope for technical criteria, “Enhanced Classroom Acoustics,” in Section 808 of ICC/A117.1-2017 and Section 1207 of the IBC (ICC, 2018). The update will apply to new school construction when and where the code is adopted in 2021. But this highlights a key caveat: a state or municipality still needs to adopt the updated building code for it to become enforceable, so we’re still kicking the can down the road.</p>

<p>What can citizens who care about this do? Instead of waiting for the updated International Building Code to be adopted, we can advocate for more rigorous acoustical guidelines to be applied now by our school construction authorities for all new school buildings.</p>

<p>And we’re still only talking about new school building construction. Noise abatement and quality acoustical treatment in older buildings becomes even more costly and complex.</p>

<p>Grant funding for schools is frequently targeted at items of more immediate concern, like technology or musical instruments. But why not seek capital funding for acoustical treatments for classrooms and cafeterias? The impact could be not only significant, but sustained over the life of the school building, reaching countless students and teachers.</p>

<p>It’s also important to remember from an advocacy perspective that acoustics are an accessibility issue. If a child is hearing impaired, their Individualized Education Program (IEP) should address the acoustical environment—this is especially important the younger the child is. But a child’s IEP is only an avenue for advocacy on a case-by-case basis. Advocacy to a public office of disability and to our public representatives is a broader way to tackle the issue, in conjunction with advocacy for individual students within the school.</p>

<h3 id="in-sum">In Sum</h3>
<ul><li>Noise makes it harder for all children to hear, to read, and to remember.</li>
<li>Children in many classrooms miss 25% or more of what their teacher says each day due to poor acoustics.</li>
<li>When a space is of poor acoustical quality, it is more likely to become noisier once in use.</li>
<li>Poor acoustical quality not only impedes student learning but also creates costly teacher voice problems.</li>
<li>Reverberation times in classrooms should ideally be 0.4 seconds or less.</li></ul>

<p>To combat noise in schools, we can:</p>
<ul><li>Stick “floor savers” (little felt disks) on the bottoms of chair and table legs</li>
<li>Install or replace acoustical ceiling tiles with noise reduction coefficient (NRC) values of 0.75 or higher</li>
<li>Install acoustical panels in cafeterias and auditoriums</li>
<li>Hang sound-absorbing stuff from the ceiling, such as:
<ul><li>Acoustical baffles, clouds, or canopies</li>
<li>Banners, flags, student work, or plants</li></ul></li>
<li>Place furniture and other large surfaces at different angles to the walls and ceiling to help diffuse sound</li>
<li>Support students in understanding and taking ownership of sound levels in their living and learning environment</li>
<li>Advocate for new school construction to adhere to ANSI or updated International Building Code guidelines for classroom acoustics</li>
<li>Seek funding for older school buildings to obtain professional acoustical treatment</li></ul>
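<p>To see how absorptive treatment like the ceiling tiles above moves a room toward the 0.4-second target, we can use the classic Sabine equation, which estimates reverberation time (RT60) from room volume and total surface absorption. The sketch below is only illustrative: the room dimensions and absorption coefficients are hypothetical values chosen to show the calculation, not measurements from any real classroom.</p>

```python
# Sabine's equation (metric units): RT60 ~= 0.161 * V / A
#   V = room volume in cubic meters
#   A = total absorption, i.e. the sum of (surface area * absorption coefficient)

def rt60(volume_m3, surfaces):
    """Estimate reverberation time in seconds via Sabine's equation.

    surfaces: list of (area_m2, absorption_coefficient) pairs.
    """
    total_absorption = sum(area * coeff for area, coeff in surfaces)
    return 0.161 * volume_m3 / total_absorption

# Hypothetical 10 m x 8 m x 3 m classroom (240 cubic meters).
room_volume = 10 * 8 * 3

# Bare hard surfaces (illustrative low absorption coefficients):
untreated = [
    (10 * 8, 0.02),            # painted ceiling
    (10 * 8, 0.03),            # tiled floor
    (2 * (10 + 8) * 3, 0.03),  # plaster walls
]

# Same room after installing NRC 0.75 acoustical ceiling tiles:
treated = [
    (10 * 8, 0.75),            # acoustical ceiling tiles, NRC 0.75
    (10 * 8, 0.03),            # tiled floor
    (2 * (10 + 8) * 3, 0.03),  # plaster walls
]

print(f"untreated RT60: {rt60(room_volume, untreated):.2f} s")
print(f"treated RT60:   {rt60(room_volume, treated):.2f} s")
```

<p>With these toy numbers, the single change to the ceiling cuts the estimated reverberation time by roughly a factor of nine, which is why ceiling treatment is usually the first intervention acousticians recommend. Real rooms also gain absorption from furniture, people, and soft materials, so a proper assessment needs professional measurement.</p>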

<h1 id="extra-credit-the-ecology-of-acoustics">Extra Credit: The Ecology of Acoustics</h1>

<p>School leaders constantly scan the acoustical environment of their building. They can tell when an escalating voice may mean a fight is about to break out, versus when a class performance is about to occur. The subtle yet critical distinction between the emotional valence of kids having fun and kids experiencing crisis becomes instinctual.</p>

<p>What about the cumulative types of sounds that kids make over time? What is the positive-to-negative ratio of words used? Can we identify the frequency and patterns of positive or problematic speech? </p>

<p>Soundscape ecologist Bernie Krause has recorded the sounds of natural ecosystems for nearly half a century. From his experience of listening intently, over time, to the sounds associated with specific places, he has developed a hypothesis that the health of an ecosystem can be gauged by the layered diversity of its sounds (Keim, 2016). According to this “niche hypothesis,” in a complex, diverse, and well-balanced ecosystem, each animal’s call finds its own place within the tapestry of sound he terms “biophony,” in the manner that the leaves of a thriving tree stagger themselves three-dimensionally to best catch the light.</p>

<p>Conversely, in a damaged ecosystem, animal sounds thin out, devolving into extremes of either noise or silence. Due to the increasing noise of human traffic and industrial activity, animals sensitive to noise may have difficulty finding a niche in which they can be heard, thus reducing their ability to procreate and thrive.</p>

<p>In the ecosystem of a school, one hopes that every child’s voice will find its niche. Yet all too often, there may be more noise than signal.</p>

<p>Can we gauge the health of a school ecosystem through the tapestry of its sounds?</p>

<p>Is there a threshold that could be identified in the trends and types of school sounds, where we could intervene before problems occur? Could issues with school climate be identified more swiftly?</p>

<p>In many U.S. cities, acoustic scanners are programmed to automatically detect gunshots, using a technology called ShotSpotter. When gunfire is detected, an alert with a GPS location is sent to police dispatchers so it can be investigated (Smith, 2016). Similarly, acoustic sensors are now being embedded in natural areas threatened by illegal logging or poaching, alerting rangers when such activity is detected (Hausheer, 2017).</p>

<p>Maybe one day automated scanners will monitor the sounds within school hallways, stairwells, and cafeterias, supplementing the instinctual sense of school leaders with acoustical data over time.</p>

<p><a href="https://languageandliteracy.blog/tag:ecosystems" class="hashtag"><span>#</span><span class="p-category">ecosystems</span></a> <a href="https://languageandliteracy.blog/tag:ecology" class="hashtag"><span>#</span><span class="p-category">ecology</span></a> <a href="https://languageandliteracy.blog/tag:acoustics" class="hashtag"><span>#</span><span class="p-category">acoustics</span></a> <a href="https://languageandliteracy.blog/tag:environment" class="hashtag"><span>#</span><span class="p-category">environment</span></a> <a href="https://languageandliteracy.blog/tag:hearing" class="hashtag"><span>#</span><span class="p-category">hearing</span></a> <a href="https://languageandliteracy.blog/tag:speech" class="hashtag"><span>#</span><span class="p-category">speech</span></a> <a href="https://languageandliteracy.blog/tag:language" class="hashtag"><span>#</span><span class="p-category">language</span></a> <a href="https://languageandliteracy.blog/tag:literacy" class="hashtag"><span>#</span><span class="p-category">literacy</span></a> <a href="https://languageandliteracy.blog/tag:learning" class="hashtag"><span>#</span><span class="p-category">learning</span></a> <a href="https://languageandliteracy.blog/tag:buildings" class="hashtag"><span>#</span><span class="p-category">buildings</span></a> <a href="https://languageandliteracy.blog/tag:architecture" class="hashtag"><span>#</span><span class="p-category">architecture</span></a> <a href="https://languageandliteracy.blog/tag:sound" class="hashtag"><span>#</span><span class="p-category">sound</span></a> <a href="https://languageandliteracy.blog/tag:noise" class="hashtag"><span>#</span><span class="p-category">noise</span></a> <a href="https://languageandliteracy.blog/tag:research" class="hashtag"><span>#</span><span class="p-category">research</span></a> <a href="https://languageandliteracy.blog/tag:reform" class="hashtag"><span>#</span><span class="p-category">reform</span></a></p>

<h2 id="references">References</h2>
<ul><li>Altieri, N. A., Pisoni, D. B. and Townsend, J. T. (2011) ‘Some normative data on lip-reading skills (L)’, The Journal of the Acoustical Society of America, 130(1), pp. 1–4. doi: 10.1121/1.3593376.</li>
<li>American Speech-Language-Hearing Association (n.d.). Classroom Acoustics (Practice Portal). Available at:  www.asha.org/Practice-Portal/Professional-Issues/Classroom-Acoustics (Accessed: 30 May 2018).</li>
<li>Anderson, K. (2001) ‘Noisy classrooms: What does the research really say?,’ Journal of Educational Audiology, 9, pp. 21–33.</li>
<li>ANSI/ASA S12.60-2010/Part 1 (2010) American National Standard Acoustical Performance Criteria, Design Requirements, and Guidelines for Schools, Part 1: Permanent Schools. Acoustical Society of America, 35 Pinelawn Road, Suite 114E, Melville, NY 11747, USA.</li>
<li>Betz, K. (2015) ‘Struggling To Hear, Learn, And Teach’, Commercial Architecture Magazine, 1 March. Available at: <a href="https://www.commercialarchitecturemagazine.com/struggling-to-hear-learn-and-teach/">https://www.commercialarchitecturemagazine.com/struggling-to-hear-learn-and-teach/</a> (Accessed: 28 May 2018).</li>
<li>Bronzaft, A. L. (1981) ‘The effect of a noise abatement program on reading ability’, Journal of Environmental Psychology, 1(3), pp. 215–222. doi: 10.1016/S0272-4944(81)80040-0.</li>
<li>Bronzaft, A. L. and McCarthy, D. P. (1975) ‘The Effect of Elevated Train Noise On Reading Ability’, Environment and Behavior, 7(4), pp. 517–528. doi: 10.1177/001391657500700406.</li>
<li>Cantor Cutiva, L. C. and Burdorf, A. (2015) ‘Medical Costs and Productivity Costs Related to Voice Symptoms in Colombian Teachers’, Journal of Voice: Official Journal of the Voice Foundation, 29(6), pp. 776.e15–22. doi: 10.1016/j.jvoice.2015.01.005.</li>
<li>Casey, J. A. et al. (2017) ‘Race/Ethnicity, Socioeconomic Status, Residential Segregation, and Spatial Variation in Noise Exposure in the Contiguous United States’, Environmental Health Perspectives, 125(7), p. 077017. doi: 10.1289/EHP898.</li>
<li>Casey, J. A., James, P. and Morello-Frosch, R. (2017) Urban noise pollution is worst in poor and minority neighborhoods and segregated cities, The Conversation. Available at: <a href="http://theconversation.com/urban-noise-pollution-is-worst-in-poor-and-minority-neighborhoods-and-segregated-cities-81888">http://theconversation.com/urban-noise-pollution-is-worst-in-poor-and-minority-neighborhoods-and-segregated-cities-81888</a> (Accessed: 19 May 2018).</li>
<li>Chepesiuk, R. (2005) ‘Decibel Hell: The Effects of Living in a Noisy World’, Environmental Health Perspectives, 113(1), pp. A34–A41.</li>
<li>Cohen, S., Glass, D. C. and Singer, J. E. (1973) ‘Apartment noise, auditory discrimination, and reading ability in children’, Journal of Experimental Social Psychology, 9(5), pp. 407–422. doi: 10.1016/S0022-1031(73)80005-8.</li>
<li>Dockrell, J. E. and Shield, B. M. (2006) ‘Acoustical barriers in classrooms: the impact of noise on performance in the classroom’, British Educational Research Journal, 32(3), pp. 509–525. doi: 10.1080/01411920600635494.</li>
<li>Evans, G. W. and Maxwell, L. (1997) ‘Chronic Noise Exposure and Reading Deficits: The Mediating Effects of Language Acquisition’, Environment and Behavior, 29(5), pp. 638–656. doi: 10.1177/0013916597295003.</li>
<li>Hausheer, J. E. (2017) Forest Soundscapes Hold the Key for Biodiversity Monitoring, Cool Green Science. Available at: <a href="https://blog.nature.org/science/2017/07/24/forest-soundscapes-hold-the-key-for-biodiversity-monitoring/">https://blog.nature.org/science/2017/07/24/forest-soundscapes-hold-the-key-for-biodiversity-monitoring/</a> (Accessed: 14 June 2018).</li>
<li>Hornickel, J. and Kraus, N. (2013) ‘Unstable Representation of Sound: A Biological Marker of Dyslexia’, Journal of Neuroscience, 33(8), pp. 3500–3504. doi: 10.1523/JNEUROSCI.4205-12.2013.</li>
<li>Hygge, S., Evans, G., and Bullinger, M. (2000) ‘The Munich airport noise study – effects of chronic aircraft noise on children’s perception and cognition,’ Inter.noise 2000, The 29th International Congress and Exhibition on Noise Control Engineering, 27-30 August. Nice, France</li>
<li>International Code Council (2018). 2017 ICC A117.1-2017 Accessible and Usable Buildings and Facilities. Available at: <a href="https://codes.iccsafe.org/public/document/ICCA117_12017">https://codes.iccsafe.org/public/document/ICCA117_12017</a> (Accessed: 10 June 2018)</li>
<li>Keim, B. (2016) Decoding Nature’s Soundtrack, Nautilus. Available at: <a href="http://nautil.us/issue/38/noise/decoding-natures-soundtrack-rp">http://nautil.us/issue/38/noise/decoding-natures-soundtrack-rp</a> (Accessed: 13 Jan 2018).</li>
<li>Klatte, M., Lachmann, T. and Meis, M. (2010) ‘Effects of noise and reverberation on speech perception and listening comprehension of children and adults in a classroom-like setting’, Noise &amp; Health, 12(49), pp. 270–282. doi: 10.4103/1463-1741.70506.</li>
<li>Kristiansen, J. et al. (2013) ‘Effects of Classroom Acoustics and Self-Reported Noise Exposure on Teachers’ Well-Being’, Environment and Behavior, 45(2), pp. 283–300. doi: 10.1177/0013916511429700</li>
<li>Mealings, K. (2016) ‘Classroom acoustic conditions: Understanding what is suitable through a review of national and international standards, recommendations, and live classroom measurements’,  Proceedings of ACOUSTICS 2016</li>
<li>Mehta, R., Zhu, R. (Juliet) and Cheema, A. (2012) ‘Is Noise Always Bad? Exploring the Effects of Ambient Noise on Creative Cognition’, Journal of Consumer Research, 39(4), pp. 784–799. doi: 10.1086/665048.</li>
<li>National Institute on Deafness and Other Communication Disorders (2016) ‘Students ask “How loud is too loud?” in the cafeteria,’ 22 July [Online]. Available at: <a href="https://www.noisyplanet.nidcd.nih.gov/have-you-heard/how-loud-is-too-loud-in-the-school-cafeteria">https://www.noisyplanet.nidcd.nih.gov/have-you-heard/how-loud-is-too-loud-in-the-school-cafeteria</a> (Accessed: 15 June 2018)</li>
<li>National Ready Mixed Concrete Association (2017) Building Code Adoption by State. Available at: <a href="https://www.nrmca.org/Codes/downloads/Master-I-Code-Adoption-Chart-Feb-2017.pdf">https://www.nrmca.org/Codes/downloads/Master-I-Code-Adoption-Chart-Feb-2017.pdf</a> (Accessed: 10 June 2018)</li>
<li>Pichora-Fuller, M. (2007) Audition and cognition: What audiologists need to know about listening. Paper presented at the Adult Conference.</li>
<li>Rogerson, J. and Dodd, B. (2005) ‘Is there an effect of dysphonic teachers’ voices on children’s processing of spoken language?’, Journal of Voice: Official Journal of the Voice Foundation, 19(1), pp. 47–60. doi: 10.1016/j.jvoice.2004.02.007.</li>
<li>Seep, B. et al. (2000) ‘Classroom Acoustics: A Resource for Creating Environments with Desirable Listening Conditions’. Acoustical Society of America Publications.</li>
<li>Shield, B. et al. (2015) ‘A survey of acoustic conditions and noise levels in secondary school classrooms in England’, The Journal of the Acoustical Society of America, 137(1), pp. 177–188. doi: 10.1121/1.4904528.</li>
<li>Smith, R. (2016) ‘Here’s how the NYPD’s expanding ShotSpotter system works,’ DNAInfo, 18 May [Online]. Available at: <a href="http://www.dnainfo.com/new-york/20160518/crown-heights/heres-how-nypds-expanding-shotspotter-system-hears-gunfire/">http://www.dnainfo.com/new-york/20160518/crown-heights/heres-how-nypds-expanding-shotspotter-system-hears-gunfire/</a> (Accessed: 14 June 2018).</li>
<li>Stansfeld, S. A. et al. (2005) ‘Aircraft and road traffic noise and children’s cognition and health: a cross-national study’, Lancet (London, England), 365(9475), pp. 1942–1949. doi: 10.1016/S0140-6736(05)66660-3.</li>
<li>United States Access Board (n.d.) About the Classroom Acoustics Rulemaking. Available at: <a href="https://www.access-board.gov/guidelines-and-standards/buildings-and-sites/classroom-acoustics">https://www.access-board.gov/guidelines-and-standards/buildings-and-sites/classroom-acoustics</a> (Accessed: 30 May 2018).</li>
<li>Verdolini, K. and Ramig, L. O. (2001) ‘Review: occupational risks for voice problems’, Logopedics, Phoniatrics, Vocology, 26(1), pp. 37–46.</li></ul>
]]></content:encoded>
      <guid>https://languageandliteracy.blog/the-influence-of-acoustics-on-learning</guid>
      <pubDate>Sun, 07 Jun 2020 03:20:10 +0000</pubDate>
    </item>
  </channel>
</rss>