<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>brains — Language &amp; Literacy</title>
    <link>https://languageandliteracy.blog/tag:brains</link>
    <description>Musings about language and literacy and learning</description>
    <pubDate>Tue, 28 Apr 2026 15:44:22 +0000</pubDate>
    <image>
      <url>https://i.snap.as/LIFR67Bi.png</url>
      <title>brains — Language &amp; Literacy</title>
      <link>https://languageandliteracy.blog/tag:brains</link>
    </image>
    <item>
      <title>Scaling Our Capacity for Processing Information</title>
      <link>https://languageandliteracy.blog/scaling-our-capacity-for-processing-information?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[The Octopus&#xA;&#xA;  “Over cultural evolution, the human species was so pressured for increased information capacity that they invented writing, a revolutionary leap forward in the development of our species that enables information capacity to be externalized, frees up internal processing and affords the development of more complex concepts. In other words, writing enabled humans to think more abstractly and logically by increasing information capacity. Today, humans have gone to even greater lengths: the Internet, computers and smartphones are testaments to the substantial pressure humans currently face — and probably faced in the past — to increase information capacity.”&#xA;&#xA;  --Uniquely human intelligence arose from expanded information capacity, Jessica Cantlon &amp; Steven Piantadosi&#xA;&#xA;According to the perspectives of the authors in the paper quoted above, the capacity to process and manage vast quantities of information is a defining characteristic of human intelligence. This ability has been extended over time through the development of tools and techniques for externalizing information, such as via language, writing, and digital technology. 
These advancements have, in turn, allowed for increasingly abstract and complex thought and technologies.&#xA;&#xA;The paper by Jessica Cantlon &amp; Steven Piantadosi further proposes that the power of scaling lies behind human intelligence, and that this same power of scaling lies behind the remarkable results achieved by artificial neural networks in areas such as speech recognition, LLMs, and computer vision. These accomplishments have been achieved not through specialized representations and domain-specific development, but through the use of simpler techniques combined with increased computational power and data capacity.&#xA;&#xA;I think the authors may be overselling scaling as the main factor behind intelligence, but scale most definitely plays a leading role alongside brain and neural network architecture and specialized data, and it most definitely plays a role in how human language is used and developed.&#xA;&#xA;The Potential of Scale&#xA;&#xA;  “LLMs give us a very effective way of accessing information from other humans.”&#xA;&#xA;  –Alison Gopnik in an interview with Julien Crockett in the Los Angeles Review of Books&#xA;&#xA;In our previous explorations of language, cognition, and Large Language Models (LLMs), the recurring theme of the power of scale has certainly emerged.&#xA;&#xA;We&#39;ve delved into the statistical nature of language, where the vast interconnectedness of word combinations and their contextual relationships drive LLMs&#39; generative abilities. We&#39;ve pondered the inherent imprecision of human language and the journey towards computational precision in LLMs. 
And throughout, the concept of scale has remained central – the scale of data, the scale of computation, and the scale of language itself.&#xA;&#xA;It&#39;s intriguing to consider the possibility, as this paper suggests, that the capacity to process increasing amounts of information may have been a key factor in the development of human intelligence. This idea extends to how, as a species, we have continually sought ways to expand our ability to store and access information, from the invention of writing to the development of computers, the internet, and smartphones.&#xA;&#xA;This suggests that the most exciting potential of artificial neural networks such as LLMs may lie not only in their ability to respond to and generate human language, but also in their ability to help us process and manage vast quantities of information, and thus further extend our cognitive capabilities. When framed in this manner, it shifts the debate from whether LLMs already demonstrate human intelligence and whether they will soon achieve superhuman intelligence, to whether LLMs will indeed equip us with superhuman abilities. And – as always with advancements in powerful technologies – the question is who among us will gain the most from those abilities and whether the new tools will further increase or diminish disparities between groups (i.e. 
“the future is already here — it&#39;s just not very evenly distributed”).&#xA;&#xA;So far, then, we’ve explored a few implications of LLMs for language and literacy development: 1) LLMs gain the base for their uncanny powers from the statistical nature of language itself; 2) LLMs present us with an opportunity for further convergence between human and machine language; and 3) LLMs present us with an opportunity to further extend our cognitive abilities by allowing us to process far more information.&#xA;&#xA;The Dark Side of Scale&#xA;&#xA;All of this said, there is a dark side to scale, as Geoffrey West elucidates in his book, Scale (more on this on my other blog, Schools &amp; Ecosystems), which is that, as we continue to scale our technologies, we consume far more energy and create far more waste beyond our biological needs and functions than any other creature on earth. As West describes it, we humans are energy-guzzling behemoths, using thirty times more energy than nature intended for creatures our size. Our outsized energy footprint makes our 7.3 billion population act as if it were in excess of 200 billion people. And we are hitting the upper limits of the earth’s ecological constraints as we do so.&#xA;&#xA;Similarly, as LLMs extend our capabilities, they consume ever more power as they consume and produce ever more data. So at the very same time that our earth is rapidly accelerating towards critical thresholds of environmental change and wreaking havoc on insect, animal, soil, and plant life, we are rapidly accelerating our consumption of energy and production of waste.&#xA;&#xA;It’s hard to see a clear end in sight to this. It’s possible that the greedy demands of continuing to scale AI model training and use end up leading to rapid development of greener technologies and accelerated efficiency in digital computation and compression. 
It’s just as possible that in our short-sighted endeavors we put a half-life on human civilization via no longer containable war, famine, disaster, and disease.&#xA;&#xA;Not to end this post on such a sour note, but it is important to maintain a healthy skepticism about a new technology and its attendant powers, even as we seek to gain from it. And from what I see in the discourse, it seems to me that there has been a pretty healthy mix of boosterism and critique and excitement and paranoia about it all, so I’m enjoying the ride, nonetheless.&#xA;&#xA;#cognition #language #AI #LLMs #technology #brains #scale&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://i.snap.as/Qfn012Nj.jpg" alt="The Octopus"/></p>

<blockquote><p>“Over cultural evolution, the human species was so pressured for increased information capacity that they invented writing, a revolutionary leap forward in the development of our species that enables information capacity to be externalized, frees up internal processing and affords the development of more complex concepts. In other words, writing enabled humans to think more abstractly and logically by increasing information capacity. Today, humans have gone to even greater lengths: the Internet, computers and smartphones are testaments to the substantial pressure humans currently face — and probably faced in the past — to increase information capacity.”</p>

<p><em>—<a href="https://www.nature.com/articles/s44159-024-00283-3.epdf?sharing_token=dc9WtYt3C_FN2N5q5mmKatRgN0jAjWel9jnR3ZoTv0PIvBIKEnJUrpLA70zYn0mjSaDkgiBUb43hOoUEou9xdgynS0nAWob7QAH5X7gROQMoz5n9acglkBUa_86OzUA1B-Wg9_p5hHRLFUQ95SWsfFXtU8jHuxKnM8_fWZKCoAA%3D">Uniquely human intelligence arose from expanded information capacity</a>, Jessica Cantlon &amp; Steven Piantadosi</em></p></blockquote>

<p>According to the perspectives of the authors in the paper quoted above, the capacity to process and manage vast quantities of information is a defining characteristic of human intelligence. This ability has been extended over time through the development of tools and techniques for externalizing information, such as via language, writing, and digital technology. These advancements have, in turn, allowed for increasingly abstract and complex thought and technologies.</p>

<p><a href="https://www-nature-com.manhattan.idm.oclc.org/articles/s44159-024-00283-3.epdf?sharing_token=dc9WtYt3C_FN2N5q5mmKatRgN0jAjWel9jnR3ZoTv0PIvBIKEnJUrpLA70zYn0mjSaDkgiBUb43hOoUEou9xdgynS0nAWob7QAH5X7gROQMoz5n9acglkBUa_86OzUA1B-Wg9_p5hHRLFUQ95SWsfFXtU8jHuxKnM8_fWZKCoAA%3D">The paper</a> by Jessica Cantlon &amp; Steven Piantadosi further proposes that the power of scaling lies behind human intelligence, and that this same power of scaling lies behind the remarkable results achieved by artificial neural networks in areas such as speech recognition, LLMs, and computer vision. These accomplishments have been achieved not through specialized representations and domain-specific development, but through the use of simpler techniques combined with increased computational power and data capacity.</p>

<p>I think the authors may be overselling scaling as the main factor behind intelligence, but scale most definitely plays a leading role alongside brain and neural network architecture and specialized data, and it most definitely plays a role in how human language is used and developed.</p>

<h2 id="the-potential-of-scale">The Potential of Scale</h2>

<blockquote><p>“LLMs give us a very effective way of accessing information from other humans.”</p>

<p><em>–Alison Gopnik <a href="https://lareviewofbooks.org/article/how-to-raise-your-artificial-intelligence-a-conversation-with-alison-gopnik-and-melanie-mitchell/?s=09">in an interview with Julien Crockett</a> in the Los Angeles Review of Books</em></p></blockquote>

<p>In our previous explorations of <a href="https://languageandliteracy.blog/language-and-llms">language, cognition, and Large Language Models (LLMs)</a>, the recurring theme of the power of scale has certainly emerged.</p>

<p>We&#39;ve delved into <a href="https://languageandliteracy.blog/the-algebra-of-language-unveiling-the-statistical-tapestry-of-form-and-meaning">the statistical nature of language</a>, where the vast interconnectedness of word combinations and their contextual relationships drive LLMs&#39; generative abilities. We&#39;ve pondered <a href="https://languageandliteracy.blog/the-pathway-of-human-language-towards-computational-precision-in-llms">the inherent imprecision of human language and the journey towards computational precision in LLMs</a>. And throughout, the concept of scale has remained central – the scale of data, the scale of computation, and the scale of language itself.</p>
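As a toy illustration of that statistical view (a sketch of mine, not code from any of the linked work): even a bigram model, which does nothing but count which word follows which, recovers a sliver of the contextual structure that LLMs exploit at vastly greater scale.

```python
from collections import defaultdict, Counter

def train_bigrams(corpus):
    """Count how often each word follows each other word."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def next_word_distribution(counts, word):
    """Turn raw counts into conditional probabilities P(next | word)."""
    total = sum(counts[word].values())
    return {w: c / total for w, c in counts[word].items()}

# A deliberately tiny, made-up corpus for illustration.
corpus = [
    "the child reads the book",
    "the child writes the story",
]
counts = train_bigrams(corpus)
print(next_word_distribution(counts, "the"))
# e.g. {'child': 0.5, 'book': 0.25, 'story': 0.25}
```

The point of the sketch is only that statistical regularities of word sequences are learnable from exposure alone; an LLM generalizes this idea across far longer contexts and far larger corpora.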

<p>It&#39;s intriguing to consider the possibility, as <a href="https://www-nature-com.manhattan.idm.oclc.org/articles/s44159-024-00283-3.epdf?sharing_token=dc9WtYt3C_FN2N5q5mmKatRgN0jAjWel9jnR3ZoTv0PIvBIKEnJUrpLA70zYn0mjSaDkgiBUb43hOoUEou9xdgynS0nAWob7QAH5X7gROQMoz5n9acglkBUa_86OzUA1B-Wg9_p5hHRLFUQ95SWsfFXtU8jHuxKnM8_fWZKCoAA%3D">this paper</a> suggests, that the capacity to process increasing amounts of information may have been a key factor in the development of human intelligence. This idea extends to how, as a species, we have continually sought ways to expand our ability to store and access information, from the invention of writing to the development of computers, the internet, and smartphones.</p>

<p>This suggests that the most exciting potential of artificial neural networks such as LLMs may lie not only in their ability to respond to and generate human <em>language</em>, but also in their ability to help us process and manage vast quantities of information, and thus further extend our cognitive capabilities. When framed in this manner, it shifts the debate from whether LLMs already demonstrate human intelligence and whether they will soon achieve superhuman intelligence, to whether LLMs will indeed equip <em>us</em> with superhuman abilities. And – as always with advancements in powerful technologies – the question is <em>who</em> among us will gain the most from those abilities and whether the new tools will further increase or diminish disparities between groups (i.e. “the future is already here — it&#39;s just not very evenly distributed”).</p>

<p>So far, then, we’ve explored a few implications of LLMs for language and literacy development: 1) LLMs gain the base for their uncanny powers from the statistical nature of language itself; 2) LLMs present us with an opportunity for further convergence between human and machine language; and 3) LLMs present us with an opportunity to further extend our cognitive abilities by allowing us to process far more information.</p>

<h2 id="the-dark-side-of-scale">The Dark Side of Scale</h2>

<p>All of this said, there is a dark side to scale, as Geoffrey West elucidates in his book, <em>Scale</em> (<a href="https://schoolecosystem.wordpress.com/2024/03/17/power-law-scaling-and-schools/">more on this</a> on my other blog, <em>Schools &amp; Ecosystems</em>), which is that, as we continue to scale our technologies, we consume far more energy and create far more waste beyond our biological needs and functions than any other creature on earth. As West describes it, we humans are energy-guzzling behemoths, using thirty times more energy than nature intended for creatures our size. Our outsized energy footprint makes our 7.3 billion population act as if it were in excess of 200 billion people. And we are hitting the upper limits of the earth’s ecological constraints as we do so.</p>

<p>Similarly, as LLMs extend our capabilities, they consume ever more power as they consume and produce ever more data. So at the very same time that our earth is rapidly accelerating towards critical thresholds of environmental change and wreaking havoc on insect, animal, soil, and plant life, we are rapidly accelerating our consumption of energy and production of waste.</p>

<p>It’s hard to see a clear end in sight to this. It’s possible that the greedy demands of continuing to scale AI model training and use end up leading to rapid development of greener technologies and accelerated efficiency in digital computation and compression. It’s just as possible that in our short-sighted endeavors we put a half-life on human civilization via no longer containable war, famine, disaster, and disease.</p>

<p>Not to end this post on such a sour note, but it is important to maintain a healthy skepticism about a new technology and its attendant powers, even as we seek to gain from it. And from what I see in the discourse, it seems to me that there has been a pretty healthy mix of boosterism and critique and excitement and paranoia about it all, so I’m enjoying the ride, nonetheless.</p>

<p><a href="https://languageandliteracy.blog/tag:cognition" class="hashtag"><span>#</span><span class="p-category">cognition</span></a> <a href="https://languageandliteracy.blog/tag:language" class="hashtag"><span>#</span><span class="p-category">language</span></a> <a href="https://languageandliteracy.blog/tag:AI" class="hashtag"><span>#</span><span class="p-category">AI</span></a> <a href="https://languageandliteracy.blog/tag:LLMs" class="hashtag"><span>#</span><span class="p-category">LLMs</span></a> <a href="https://languageandliteracy.blog/tag:technology" class="hashtag"><span>#</span><span class="p-category">technology</span></a> <a href="https://languageandliteracy.blog/tag:brains" class="hashtag"><span>#</span><span class="p-category">brains</span></a> <a href="https://languageandliteracy.blog/tag:scale" class="hashtag"><span>#</span><span class="p-category">scale</span></a></p>
]]></content:encoded>
      <guid>https://languageandliteracy.blog/scaling-our-capacity-for-processing-information</guid>
      <pubDate>Thu, 04 Jul 2024 03:56:02 +0000</pubDate>
    </item>
    <item>
      <title>Accelerating the Inner Scaffold Across Modalities and Languages</title>
      <link>https://languageandliteracy.blog/accelerating-the-inner-scaffold-across-modalities-and-languages?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[In my last post, we landed on the idea of a nascent scaffold that we are born with in our brains, which is developed through our daily interactions with one another – and then further accelerated through the reinforcement and extension of written language use.&#xA;&#xA;Before we venture into the wilds of the possible relations between language and thought, I wanted to build on this idea of how our inner scaffolds are most fully realized through speaking, listening, reading, and writing by geeking out about the beauty and wonder of multilingualism.&#xA;&#xA;There was a beautiful study I came across recently that provides a great way to visualize this.&#xA;&#xA;spoken to written language across languages&#xA;&#xA;The researchers used functional near-infrared spectroscopy (fNIRS) to examine neural connectivity during English word processing in bilingual (Chinese-English and Spanish-English) and monolingual children.&#xA;&#xA;The study groups included children (ages 5-10 years, Grades K-4) who were English monolinguals, Chinese-English bilinguals, or Spanish-English bilinguals, all receiving English-dominant education in the US (recruited from southeast Michigan, USA).&#xA;&#xA;The researchers found that the greater proficiency a child (across all groups) had in both spoken and written language, the stronger the long-distance connections across their brains were. In other words, spoken and written language exposure and use built longer-distance connections across the brain, and then strengthened and reinforced those connections.&#xA;&#xA;Children who were older and more proficient in spoken and written English showed more long-distance connections within the broader language network and across the two hemispheres, suggesting that advancements in language skills are supported by more integrated neural networks. 
In other words, the development of short-distance connections supports more basic language functions, while long-distance integrative connections mark more advanced or efficient language processing in older and more proficient children.&#xA;&#xA;Furthermore, among bilinguals they found that the greater proficiency a child had in two languages, the denser those language networks were. In Spanish bilinguals, the network density was associated with Spanish vocabulary, whereas in Chinese bilinguals, the network density was associated with Chinese character reading. Both groups showed greater network density in English in relation to their heritage language skills (most likely due to greater time spent in instruction and use with that language).&#xA;&#xA;These findings suggest that language development is supported by both short and long distance connectivity in a child’s brain. Moreover, long-distance connections are likely critical in integrating different and more complex aspects of language processes such as phonological and morpho-semantic analyses.&#xA;&#xA;What a wonderful visualization of how our inner scaffolds – the nascent neural networks in our brains – are developed by language and literacy! The more we use language across oral (or signed) and written modalities, the more we refine those networks across our brains. And the more languages we speak (or sign) and write, the more we further strengthen those networks based on the unique features of those languages.&#xA;&#xA;We see this with students who are in dual language programs for multiple years – they begin to outperform their monolingual peers. 
We see this with former English language learners (ELLs) who achieve English proficiency – once proficient, they begin to outperform their monolingual peers.&#xA;&#xA;So not only do we want to provide our children with daily textual feasts – but also with linguistic knowledge-building feasts.&#xA;&#xA;#language #research #neuroscience #brains #literacy #reading #writing #multilingualism #bilinguals]]&gt;</description>
      <content:encoded><![CDATA[<p><a href="https://languageandliteracy.blog/the-inner-scaffold-for-language-and-literacy">In my last post</a>, we landed on the idea of a nascent scaffold that we are born with in our brains, which is developed through our daily interactions with one another – and then further accelerated through the reinforcement and extension of written language use.</p>

<p>Before we venture into the wilds of the possible relations between language and thought, I wanted to build on this idea of how our inner scaffolds are most fully realized through speaking, listening, reading, and writing by geeking out about the beauty and wonder of multilingualism.</p>

<p>There was <a href="https://direct.mit.edu/nol/article/doi/10.1162/nol_a_00092/113801/Bilingual-proficiency-enhances-neural-network">a beautiful study</a> I came across recently that provides a great way to visualize this.</p>

<p><img src="https://i.snap.as/XgBFKzcU.png" alt="spoken to written language across languages"/></p>

<p>The researchers used functional near-infrared spectroscopy (fNIRS) to examine neural connectivity during English word processing in bilingual (Chinese-English and Spanish-English) and monolingual children.</p>

<p>The study groups included children (ages 5-10 years, Grades K-4) who were English monolinguals, Chinese-English bilinguals, or Spanish-English bilinguals, all receiving English-dominant education in the US (recruited from southeast Michigan, USA).</p>

<p>The researchers found that the greater proficiency a child (across all groups) had in both spoken and written language, the stronger the long-distance connections across their brains were. In other words, spoken and written language exposure and use built longer-distance connections across the brain, and then strengthened and reinforced those connections.</p>

<p>Children who were older and more proficient in spoken and written English showed more long-distance connections within the broader language network and across the two hemispheres, suggesting that advancements in language skills are supported by more integrated neural networks. In other words, the development of short-distance connections supports more basic language functions, while long-distance integrative connections mark more advanced or efficient language processing in older and more proficient children.</p>

<p>Furthermore, among bilinguals they found that the greater proficiency a child had in two languages, the denser those language networks were. In Spanish bilinguals, the network density was associated with Spanish vocabulary, whereas in Chinese bilinguals, the network density was associated with Chinese character reading. Both groups showed greater network density in English in relation to their heritage language skills (most likely due to greater time spent in instruction and use with that language).</p>

<p><a href="https://direct.mit.edu/nol/article/doi/10.1162/nol_a_00092/113801/Bilingual-proficiency-enhances-neural-network">These findings</a> suggest that language development is supported by both short and long distance connectivity in a child’s brain. Moreover, long-distance connections are likely critical in integrating different and more complex aspects of language processes such as phonological and morpho-semantic analyses.</p>

<p>What a wonderful visualization of how our inner scaffolds – the nascent neural networks in our brains – are developed by language and literacy! The more we use language across oral (or signed) and written modalities, the more we refine those networks across our brains. And the more languages we speak (or sign) and write, the more we further strengthen those networks based on the unique features of those languages.</p>

<p>We see this with students who are in dual language programs for multiple years – they begin to outperform their monolingual peers. We see this with former English language learners (ELLs) who achieve English proficiency – once proficient, they begin to outperform their monolingual peers.</p>

<p>So not only do we want to provide our children with daily <a href="https://languageandliteracy.blog/provide-our-students-with-textual-feasts">textual feasts</a> – but also with <em>linguistic</em> knowledge-building feasts.</p>

<p><a href="https://languageandliteracy.blog/tag:language" class="hashtag"><span>#</span><span class="p-category">language</span></a> <a href="https://languageandliteracy.blog/tag:research" class="hashtag"><span>#</span><span class="p-category">research</span></a> <a href="https://languageandliteracy.blog/tag:neuroscience" class="hashtag"><span>#</span><span class="p-category">neuroscience</span></a> <a href="https://languageandliteracy.blog/tag:brains" class="hashtag"><span>#</span><span class="p-category">brains</span></a> <a href="https://languageandliteracy.blog/tag:literacy" class="hashtag"><span>#</span><span class="p-category">literacy</span></a> <a href="https://languageandliteracy.blog/tag:reading" class="hashtag"><span>#</span><span class="p-category">reading</span></a> <a href="https://languageandliteracy.blog/tag:writing" class="hashtag"><span>#</span><span class="p-category">writing</span></a> <a href="https://languageandliteracy.blog/tag:multilingualism" class="hashtag"><span>#</span><span class="p-category">multilingualism</span></a> <a href="https://languageandliteracy.blog/tag:bilinguals" class="hashtag"><span>#</span><span class="p-category">bilinguals</span></a></p>
]]></content:encoded>
      <guid>https://languageandliteracy.blog/accelerating-the-inner-scaffold-across-modalities-and-languages</guid>
      <pubDate>Mon, 09 Oct 2023 12:03:57 +0000</pubDate>
    </item>
    <item>
      <title>Language—like reading—may not be innate</title>
      <link>https://languageandliteracy.blog/language-like-reading-may-not-be-innate?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[Colors of the mind&#xA;Language is a uniquely human phenomenon that develops in children with remarkable ease and fluency. Yet questions remain about how we acquire language. Is it innately wired in our brain, or do we learn all facets rapidly from birth?&#xA;&#xA;Two books – Rethinking Innateness and The Language Game – provide us with some fascinating perspectives on language learning that bear implications for how we think about learning to read and write, and furthermore, for how we talk about the power and limitations of AI.&#xA;&#xA;A Review of Where We’ve Been&#xA;&#xA;In a previous series, we pursued an interesting debate about whether learning to read is more unnatural than learning oral or signed languages. We also investigated the notion, frequently stated by “science of reading” proponents, that “our brains were not born to read,” while our brains are “hard-wired” for language.&#xA;&#xA;While I agree with researchers Gough, Hillinger, Liberman and others that written language is more complex and abstract than oral language and—hence—more difficult to acquire, I’m not convinced that calling it unnatural is most accurate. Instead, I suggest terming it effortful.&#xA;&#xA;In one of the earlier papers we examined, Liberman argued that oral language is pre-cognitive, meaning that it requires no cognition to learn and thus is more natural to acquire. He used this claim to counter the Goodmans’ assertion that oral and written language were largely synonymous, and that kids therefore could learn to read merely through exposure to literacy, rather than explicit instruction in the alphabetic principle (“whole language”). 
While I most definitely don’t agree with the Goodmans, I paused on Liberman’s claim with some skepticism, as there are a subset of kids who also struggle to develop speech and language skills, just as there are a subset of kids who struggle to develop reading and writing skills.&#xA;&#xA;Liberman also made another strong claim that I paused on: that the evolution of oral language is biological, while written language is cultural (which parallels arguments that language is &#34;biologically primary&#34; while reading and writing are &#34;biologically secondary,&#34; which I have also questioned, given that making the distinction is harder than it seems when social and cultural advancements are deeply interwoven with human existence over generations of time). But I mostly accepted this premise, as it seems to be self-evident that language is baked into our brains. After all, babies begin to attune to languages spoken around them even while still in the womb.&#xA;&#xA;Liberman does not stand on his own in these assertions, I should hasten to add. I just bring one of his papers up because we spent time with it here. Noam Chomsky, for example, has long argued for a universal grammar, which is taught in foundational courses on linguistics, and the related study of generative grammars is alive and well.&#xA;&#xA;Why is this important? It’s important because whether we consider language “natural” or written language “unnatural” bears implications for how we decide to teach them (or not). If we think of language as completely innate, then perhaps we don’t think it requires much of any teaching that is explicit, systematic, or diagnostic. 
Or conversely, if we think of written language as wholly unnatural, we may not consider how to strategically design opportunities for implicit learning, volume, and exposure.&#xA;&#xA;Yet I have just read two books, written in two different decades, that provide some really interesting critiques of the widely adopted supposition that language is innate.&#xA;&#xA;Language Models&#xA;&#xA;The first book, Rethinking Innateness: A Connectionist Perspective on Development, by Elizabeth Bates, Jeffrey Elman, Mark H. Johnson, Annette Karmiloff-Smith, Domenico Parisi and Kim Plunkett, was published in 1996, and approaches language from the lens of neuroscience, explaining connectionist models and their implications for neural development and learning. These models are not only part of the lineage of the current renaissance of Large Language Models, such as ChatGPT, but also part of a lineage of models that have informed our theoretical understanding of how children learn to read, and may continue to inform explorations of “statistical learning.”&#xA;&#xA;I was led to this book from a recommendation by Marc Joanisse, a researcher at Western University, when he commented on my tweet (are we still calling them that?) 
about research on artificial neural networks that suggests they can accurately model language learning in human brains.&#xA;&#xA;It was a great recommendation, and I found the book extremely relevant to ongoing conversations about AI and LLMs today, in addition to providing key insights from connectionist models into language and literacy development that challenge assumptions around innateness, such as:&#xA;&#xA;Simulations show that simple learning algorithms and architectures can enable rapid learning and sophisticated representations, such as those seen in younger infant competencies, without any innate knowledge.&#xA;U-shaped learning and discontinuous change also occur in neural networks without innate knowledge, due to architecture, input, and time spent on learning. This parallels studies of the development of linguistic abilities in children, such as the learning of past-tense and pronouns.&#xA;The way in which neural networks learn new things can be simple, yet the learning yields surprisingly complex results. This complexity emerges as the product of many simple interactions over time (this point, written in 1996, seems incredibly prescient to me as a reader in 2023 using Claude2 to distill and summarize my notes from each book for this post).&#xA;Connectionist models show global effects can emerge from local interactions rather than centralized control. Connectionist models also show how structured behaviors can emerge in neural networks through exposure to and interactions with the environment, without explicit rules or representations programmed in (which makes me think of statistical learning).&#xA;&#xA;Language Games&#xA;&#xA;The second book, The Language Game: How Improvisation Created Language and Changed the World, by Morten H. 
Christiansen and Nick Chater, was published in 2022, and focuses more on cultural evolution and social transmission of language, arguing that language is akin to a game of charades that is honed and passed on from generation to generation. I happened to check it out from the library and read it concurrently with Rethinking Innateness, and there was some great synergy between the two, especially around challenging the notion that language is innate. Some of the key points of the book:&#xA;&#xA;Language relies on and recruits existing cognitive mechanisms, becoming increasingly specialized through extensive practice and use.&#xA;Language evolves culturally to fit the human brain, not the reverse. &#xA;Language is shaped for learnability and for coordinating with other learners, not for abstract principles and rules. Children follow paths set by previous generations.&#xA;This cultural transmission across generations shapes language to be more learnable through reuse of memorable chunks (“constructions”). &#xA;Due to working memory limitations, more memorable chunks survive, causing a design without a designer. These chunks become increasingly standardized over time.&#xA;Language input must be processed immediately before it is lost (what the authors call the “Now-or-Never” bottleneck). &#xA;Chunking sounds into words and phrases buys more time to process meaning. &#xA;Gaining fluency with increasingly larger and more complex constructions of language requires extensive practice.&#xA;&#xA;Across Connectionism and Charades&#xA;&#xA;Together, these books provide a picture of language as an emergent, complex cultural and statistical phenomenon that has evolved from simple learning mechanisms across generations. Rather than an innate universal grammar baked into children’s brains, language itself has adapted and molded over time to become essential to our human inheritance, as with clothing, pottery, or fire. 
Language emerges through social human communication and interaction. It becomes increasingly complex, yet also streamlined and standardized, without any explicit rules governing it beyond the constraints of our brains, tongues, and cognition.&#xA;&#xA;This isn’t to say there isn’t something unique about the human brain architecture in comparison to our closest animal brethren—there clearly is—but rather that language has adapted to that architecture, like a symbiotic organism fitting itself to its host, rather than arising from parts of our brain that are genetically pre-determined for language.&#xA;&#xA;Like reading, using language drives increasing specialization of our brain—and this specialization, in turn, drives greater cognitive ability and communicative reach.&#xA;&#xA;There’s a lot here to unpack and synthesize, but I wanted to begin bringing these together, because just as I feel myself pushing against the zeitgeist when I argue that calling learning to read “unnatural” isn’t quite right, so too are arguments that learning language is not “innate” swimming against the tide. These two counterclaims are interwoven, and I think worth further exploring.&#xA;&#xA;Consider this post the first in an exploratory series. We’ll geek out on language development and its similarities and differences to literacy development, maybe dig into the relation of cognition and language and literacy a little, and riff on the implications for AI, ANNs, and LLMs.&#xA;&#xA;#language #literacy #natural #innateness #unnatural #reading #neuralnetworks #research #brains #linguistics #models]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://i.snap.as/b1U0s1kr.jpeg" alt="Colors of the mind"/>
Language is a uniquely human phenomenon that develops in children with remarkable ease and fluency. Yet questions remain about how we acquire language. Is it innately wired into our brains, or do we learn all of its facets rapidly, beginning at birth?</p>

<p>Two books – <em>Rethinking Innateness</em> and <em>The Language Game</em> – provide us with some fascinating perspectives on language learning that bear implications for how we think about learning to read and write, and furthermore, for how we talk about the power and limitations of AI.
</p>

<h1 id="a-review-of-where-we-ve-been">A Review of Where We’ve Been</h1>

<p><a href="https://languageandliteracy.blog/natural-vs">In a previous series</a>, we pursued an interesting debate about whether learning to read is more unnatural than learning oral or signed languages. We also investigated the notion, frequently stated by <a href="https://languageandliteracy.blog/the-science-of-reading">“science of reading”</a> proponents, that <a href="https://write.as/manderson/our-brains-were-not-born-to-read-right">“our brains were not born to read,”</a> while our brains are “hard-wired” for language.</p>

<p>While I agree with researchers Gough, Hillinger, Liberman and others that written language is more complex and abstract than oral language and—hence—more difficult to acquire, I’m not convinced that calling it <em>unnatural</em> is most accurate. Instead, <a href="https://write.as/manderson/a-finale-learning-to-read-and-write-is-a-remarkable-human-feat">I suggest terming it <em>effortful</em></a>.</p>

<p>In <a href="https://write.as/manderson/the-relation-of-speech-to-reading-and-writing">one of the earlier papers</a> we examined, Liberman argued that oral language is pre-cognitive, meaning that it requires no cognition to learn and thus is more natural to acquire. He used this claim to counter the <a href="https://write.as/manderson/learning-to-read-an-unnatural-act">Goodmans’ assertion</a> that oral and written language were largely synonymous, and that kids therefore could learn to read merely through exposure to literacy, rather than explicit instruction in the alphabetic principle (“whole language”). While I most definitely don’t agree with the Goodmans, I paused on Liberman’s claim with some skepticism, as there is a subset of kids who struggle to develop speech and language skills, just as there is a subset of kids who struggle to develop reading and writing skills.</p>

<p>Liberman also made another strong claim that I paused on: that the evolution of oral language is biological, while written language is cultural (<em>which parallels arguments that language is “biologically primary” while reading and writing are “biologically secondary,” which I have also questioned, given that making the distinction is harder than it seems when social and cultural advancements are deeply interwoven with human existence over generations of time</em>). But I mostly accepted this premise, as it seems to be self-evident that language is baked into our brains. After all, babies begin to attune to languages spoken around them <a href="https://www.wired.com/2013/01/utero-babies-languag/"><em>even while still in the womb</em></a>.</p>

<p>Liberman does not stand on his own in these assertions, I should hasten to add. I just bring one of his papers up because we spent time with it here. Noam Chomsky, for example, has long argued for a <a href="https://en.wikipedia.org/wiki/Universal_grammar">universal grammar</a>, which is taught in foundational courses on linguistics, and the related study of generative grammars is alive and well.</p>

<p>Why is this important? It’s important because whether we consider language “natural” or written language “unnatural” bears implications for how we decide to teach them (or not). If we think of language as completely innate, then perhaps we don’t think it requires much of any teaching that is explicit, systematic, or diagnostic. Or conversely, if we think of written language as wholly unnatural, we may not consider how to strategically design opportunities for implicit learning, volume, and exposure.</p>

<p>Yet I have just read two books, written in two different decades, that provide some really interesting critiques of the widely adopted supposition that language is innate.</p>

<h1 id="language-models">Language Models</h1>

<p>The first book, <em>Rethinking Innateness: A Connectionist Perspective on Development</em>, by Elizabeth Bates, Jeffrey Elman, Mark H. Johnson, Annette Karmiloff-Smith, Domenico Parisi and Kim Plunkett, was published in 1996, and approaches language through the lens of neuroscience, explaining connectionist models and their implications for neural development and learning. These models are not only part of the lineage of the current renaissance of Large Language Models, such as ChatGPT, but also part of a lineage of models that have informed our theoretical understanding of how children learn to read, and may continue to inform explorations of “statistical learning.”</p>

<p>I was led to this book from <a href="https://x.com/drmarcj/status/1662841595408838659?s=20">a recommendation</a> by Marc Joanisse, a researcher at Western University, when he commented on <a href="https://x.com/mandercorn/status/1662805794818076677?s=20">my tweet</a> (are we still calling them that?) about research on artificial neural networks that suggests they can accurately model language learning in human brains.</p>

<p>It was a great recommendation, and I found the book extremely relevant to ongoing conversations about AI and LLMs today, in addition to providing key insights from connectionist models into language and literacy development that challenge assumptions around innateness, such as:</p>
<ul><li>Simulations show that simple learning algorithms and architectures can enable rapid learning and sophisticated representations, such as those seen in younger infant competencies, without any innate knowledge.</li>
<li><a href="https://unt.univ-cotedazur.fr/uoh/learn_teach_FL/affiche_theorie.php?id_activite=53">U-shaped learning</a> and discontinuous change also occur in neural networks without innate knowledge, due to architecture, input, and time spent on learning. This parallels studies of the development of linguistic abilities in children, such as the learning of past-tense and pronouns.</li>
<li>The way in which neural networks learn new things can be simple, yet the learning yields surprisingly complex results. This complexity emerges as the product of many simple interactions over time (<em>this point, written in 1996, seems incredibly prescient to me as a reader in 2023 using Claude2 to distill and summarize my notes from each book for this post</em>).</li>
<li>Connectionist models show global effects can emerge from local interactions rather than centralized control. Connectionist models also show how structured behaviors can emerge in neural networks through exposure to and interactions with the environment, without explicit rules or representations programmed in (which makes me think of <em>statistical learning</em>).</li></ul>
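<p>To make that last idea of “statistical learning” concrete, here is a toy sketch of my own (not a model from the book): given nothing but a stream of syllables, simple bigram statistics reveal word boundaries, with no rules or grammar built in. The three-word mini-lexicon below is hypothetical, loosely in the style of classic infant segmentation studies.</p>

```python
import random
from collections import Counter

random.seed(0)
# Hypothetical three-word lexicon; each word is three syllables.
words = ["bidaku", "padoti", "golabu"]

# A stream of 300 randomly ordered words, presented as bare syllables
# with no pauses or boundary markers.
stream = []
for _ in range(300):
    word = random.choice(words)
    stream.extend([word[i:i + 2] for i in range(0, len(word), 2)])

# Count how often each syllable is followed by each other syllable.
pairs = Counter(zip(stream, stream[1:]))
firsts = Counter(stream[:-1])

def transition_prob(a, b):
    """P(next syllable is b, given the current syllable is a)."""
    return pairs[(a, b)] / firsts[a]

# Within a word the transition is perfectly predictable; across a word
# boundary it hovers near chance (about 1/3 here) -- a purely
# statistical cue a learner could use to segment words.
within = transition_prob("bi", "da")   # inside "bidaku"
across = transition_prob("ku", "pa")   # "bidaku" -> "padoti" boundary
```

<p>Dips in transitional probability mark plausible word boundaries, yet no explicit notion of a “word” was ever programmed in, which is the spirit of the connectionist points above.</p>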

<h1 id="language-games">Language Games</h1>

<p>The second book, <a href="https://mitpressbookstore.mit.edu/book/9781541674981"><em>The Language Game: How Improvisation Created Language and Changed the World</em></a>, by Morten H. Christiansen and Nick Chater, was published in 2022, and focuses more on cultural evolution and social transmission of language, arguing that language is akin to a game of charades that is honed and passed on from generation to generation. I happened to check it out from the library and read it concurrently with <em>Rethinking Innateness</em>, and there was some great synergy between the two, especially around challenging the notion that language is innate. Some of the key points of the book:</p>
<ul><li>Language relies on and recruits existing cognitive mechanisms, becoming increasingly specialized through extensive practice and use.</li>
<li>Language evolves culturally to fit the human brain, not the reverse.</li>
<li>Language is shaped for learnability and for coordinating with other learners, not for abstract principles and rules. Children follow paths set by previous generations.</li>
<li>This cultural transmission across generations shapes language to be more learnable through reuse of memorable chunks (“constructions”).</li>
<li>Due to working memory limitations, more memorable chunks survive, causing a design without a designer. These chunks become increasingly standardized over time.</li>
<li>Language input must be processed immediately before it is lost (what the authors call the “Now-or-Never” bottleneck).</li>
<li>Chunking sounds into words and phrases buys more time to process meaning.</li>
<li>Gaining fluency with increasingly larger and more complex constructions of language requires extensive practice.</li></ul>
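<p>The chunking point can also be sketched in code. The toy example below is my own illustration (not the authors' model): a greedy longest-match pass groups a word stream into familiar multi-word “constructions,” leaving fewer units to hold in working memory at once.</p>

```python
def chunk(tokens, constructions):
    """Greedily group tokens, preferring the longest known construction."""
    out, i = [], 0
    while i < len(tokens):
        # Try the longest candidate first, falling back to a single token.
        for size in range(min(4, len(tokens) - i), 0, -1):
            candidate = tuple(tokens[i:i + size])
            if size == 1 or candidate in constructions:
                out.append(" ".join(candidate))
                i += size
                break
    return out

# Hypothetical stock of well-worn constructions.
known = {("by", "the", "way"), ("how", "are", "you")}
utterance = "by the way how are you today".split()
chunks = chunk(utterance, known)
# Seven raw tokens collapse into three chunks:
# ["by the way", "how are you", "today"]
```

<p>In this cartoon version, a listener tracking three chunks instead of seven tokens has bought extra processing time before the “Now-or-Never” bottleneck sweeps the input away.</p>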

<h1 id="across-connectionism-and-charades">Across Connectionism and Charades</h1>

<p>Together, these books provide a picture of language as an emergent, complex cultural and statistical phenomenon that has evolved from simple learning mechanisms across generations. Rather than an innate universal grammar baked into children’s brains, language itself has adapted and molded over time to become essential to our human inheritance, as with clothing, pottery, or fire. Language emerges through social human communication and interaction. It becomes increasingly complex, yet also streamlined and standardized, without any explicit rules governing it beyond the constraints of our brains, tongues, and cognition.</p>

<p>This isn’t to say there isn’t something unique about the human brain architecture in comparison to our closest animal brethren—there clearly is—but rather that language has adapted to that architecture, like a symbiotic organism fitting itself to its host, rather than arising from parts of our brain that are genetically pre-determined for language.</p>

<p>Like reading, using language drives increasing specialization of our brain—and this specialization, in turn, drives greater cognitive ability and communicative reach.</p>

<p>There’s a lot here to unpack and synthesize, but I wanted to begin bringing these together, because just as I feel myself pushing against the zeitgeist when I argue that calling learning to read “unnatural” isn’t quite right, so too are arguments that learning language is not “innate” swimming against the tide. These two counterclaims are interwoven, and I think worth further exploring.</p>

<p>Consider this post the first in an exploratory series. We’ll geek out on language development and its similarities and differences to literacy development, maybe dig into the relation of cognition and language and literacy a little, and riff on the implications for AI, ANNs, and LLMs.</p>

<p><a href="https://languageandliteracy.blog/tag:language" class="hashtag"><span>#</span><span class="p-category">language</span></a> <a href="https://languageandliteracy.blog/tag:literacy" class="hashtag"><span>#</span><span class="p-category">literacy</span></a> <a href="https://languageandliteracy.blog/tag:natural" class="hashtag"><span>#</span><span class="p-category">natural</span></a> <a href="https://languageandliteracy.blog/tag:innateness" class="hashtag"><span>#</span><span class="p-category">innateness</span></a> <a href="https://languageandliteracy.blog/tag:unnatural" class="hashtag"><span>#</span><span class="p-category">unnatural</span></a> <a href="https://languageandliteracy.blog/tag:reading" class="hashtag"><span>#</span><span class="p-category">reading</span></a> <a href="https://languageandliteracy.blog/tag:neuralnetworks" class="hashtag"><span>#</span><span class="p-category">neuralnetworks</span></a> <a href="https://languageandliteracy.blog/tag:research" class="hashtag"><span>#</span><span class="p-category">research</span></a> <a href="https://languageandliteracy.blog/tag:brains" class="hashtag"><span>#</span><span class="p-category">brains</span></a> <a href="https://languageandliteracy.blog/tag:linguistics" class="hashtag"><span>#</span><span class="p-category">linguistics</span></a> <a href="https://languageandliteracy.blog/tag:models" class="hashtag"><span>#</span><span class="p-category">models</span></a></p>
]]></content:encoded>
      <guid>https://languageandliteracy.blog/language-like-reading-may-not-be-innate</guid>
      <pubDate>Sat, 12 Aug 2023 07:48:06 +0000</pubDate>
    </item>
  </channel>
</rss>