Language & Literacy

Musings about language and literacy and learning

The learning ecosystem

I haven’t written many posts in 2025; here are the measly few I’ve managed to squeak out:

While my bandwidth to peruse research has diminished this year (work has been busy, and I like spending time with my children), I have still encountered a fair number of compelling studies. In keeping with the tradition begun in 2023, and building on last year’s review, I am endeavoring to round up the research that has crossed my radar over the last 12 months.

This year presents a difficult juncture for research. Political aggression against academic institutions, the immigrants who power their PhD programs, and the federal contracts essential to their survival has disrupted research. Despite this, strong research continues to be published. Because research is a slow-moving endeavor, I suspect the full effects of these disruptions will manifest increasingly in future roundups; for now, the good work persists.

The research landscape of 2025 highlights a continued shift toward experience-dependent plasticity. This view treats the human mind as a dynamic ecosystem shaped by biological rhythms, cultural “software,” and technological catalysts. Learning is no longer seen as a linear accumulation of skills, but as a sophisticated orchestration of “statistical” internal models and external social, cultural, and technological attunements.

Longtime readers will recognize this “ecosystem” view from my other blog on Schools as Ecosystems. It is validating to see the field increasingly adopting this ecological lens—viewing the learner not as an isolated machine, but as an organism deeply embedded in a biological and cultural context.

Our “big buckets” for this year have ended up mirroring the 2024 roundup, which means, methinks, that we have settled upon a perennial organizational structure:

  • The Science of Reading and Writing
  • Content Knowledge as an Anchor to Literacy
  • Studies on Language Development
  • Multilinguals and Multilingualism
  • Rhythm, Attention, and Memory
  • School, Social-Emotional, and Contextual Effects
  • The Frontier of Artificial Intelligence and Neural Modeling

Let’s jump in!

Read more...

In the typical Hollywood action movie, a hero acquires master-level skill in a specialized art, such as Kung Fu, in a few power-ballad-backed minutes of a training montage.

In real life, it may seem self-evident that gaining mastery takes years of intense, deliberate, and guided work. Yet the perennial optimism of students cramming the night before an exam tells us that the pursuit of a cognitive shortcut may be an enduring human impulse.

It is unsurprising, then, that students—and many adults—increasingly use the swiftly advancing tools of AI and Large Language Models (LLMs) as a shortcut around deeper, more effortful cognitive work.

Read more...

The Surprising Cognitive Science of a Walk in the Park

The capacity for intense focus in our students is a finite resource—a cognitive fuel tank that can, and does, run low. We see the results in the classroom: irritability, impatience, and a fraying of impulse control. But what if one of the most powerful tools for refueling that tank wasn't a new pedagogical strategy, but something far more fundamental?

Five years ago, I wrote about the profound impact that greenery can have on health and learning in The Influence of Greenery on Learning. When I recently listened to Dr. Marc Berman, Director of the Environmental Neuroscience Lab at the University of Chicago, expand on this research on the Many Minds podcast, it prompted me to revisit that post. I was humbled to realize how many of his foundational studies I had completely overlooked. This new understanding reveals that nature is not just an amenity, but a necessity for cognition.

Read more...

Language is the ever-present medium of teaching and learning, the element that infuses every classroom interaction. Yet how often do we explicitly plan the content, structure, and quality of this critical element?

While we meticulously map out and prepare for the activities we engage our students in, the specific linguistic structures and vocabulary we employ often remain implicit, almost accidental. This raises critical questions: which aspects of our classroom talk truly accelerate literacy – is it sheer volume, vocabulary precision, or syntactic complexity? And how can we become more deliberate and intentional architects of this vital linguistic environment for all students, including those developing multi-dialectalism and multilingualism?

My recent presentation at ResearchED in NYC ventured into this territory, examining the research on how the linguistic environment we curate can influence student literacy achievement.

Read more...

Stacks of papers

2024 was another great year filled with fascinating research.

Over the course of this year, I’ve written a few posts about some of it:

Last year, I began a tradition that seems worth maintaining: reviewing all the sundry research that has come across my radar over the course of 2024.

Read more...

Squirrels on a book

Learning new information in L2 is more effortful than in L1. We found different functional connectivity networks of naturalistic learning through speech among adolescents, confirming this prevalent observation

–Tweet from McGill University Professor Gigi Luk

Does learning language require effort? Does it require more effort when learning a new language later in our lives? Why?

Today, we will highlight a study that shows the additional neurological networks that adolescents activate when learning in a second language – a key insight for all educators to consider.

Language Learning: Effortless for Babies, Effortful for Adults

Babies learn language with such ease that they have already begun to recognize the unique patterns of a language – even to distinguish between the patterns of multiple languages – while still in the womb.

We therefore tend to assume there is something wholly innate or natural about learning language.

Yet as we’ve explored previously in a series on this blog, even learning our first languages may not be as innate or natural as it can appear. Human language reflects a unique synchrony between our biological and cultural evolution, finely attuned to the social environment in which we interact.

Read more...

Novice bunny and expert bunny on bikes

Typically, when I begin a series of blog posts to conduct nerdy inquiry into an abstract topic, I don't know where I'm going to end up. This series on LLMs was unusual in that, in our first post, I outlined pretty much the exact topics I would go on to cover.

Here's where I had spitballed we might go:

  • The surprisingly inseparable interconnection between form and meaning
  • Blundering our way to computational precision through human communication; Or, the generative tension between regularity and randomness
  • The human (and now, machine) capacity for learning and using language may simply be a matter of scale
  • Is language as separable from thought (and, for that matter, from the world) as Cormac McCarthy said?
  • Implicit vs. explicit learning of language and literacy

Indeed, we then went on to explore each of these areas, in that order. Cool!

Read more...

NYC skyline

The Surprising Success of Large Language Models

“The success of large language models is the biggest surprise in my intellectual life. We learned that a lot of what we used to believe may be false and what I used to believe may be false. I used to really accept, to a large degree, the Chomskyan argument that the structures of language are too complex and not manifest in input so that you need to have innate machinery to learn them. You need to have a language module or language instinct, and it’s impossible to learn them simply by observing statistics in the environment.

If it’s true — and I think it is true — that the LLMs learn language through statistical analysis, this shows the Chomskyan view is wrong. This shows that, at least in theory, it’s possible to learn languages just by observing a billion tokens of language.”

–Paul Bloom, in an interview with Tyler Cowen
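Bloom’s phrase “observing statistics in the environment” can be made concrete with a toy sketch (my own illustration, not from the interview): a bigram model that simply counts which word follows which in a tiny corpus and then predicts the most frequent follower. LLMs operate on this same statistical principle, vastly scaled up with neural networks and billions of tokens.

```python
from collections import defaultdict, Counter

# A tiny corpus of tokenized "sentences."
corpus = (
    "the cat sat on the mat . "
    "the cat sat on the rug . "
    "the dog chased the cat ."
).split()

# Learn purely from observed statistics: count word -> next-word frequencies.
bigrams = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    bigrams[current][following] += 1

def predict_next(word):
    """Return the most frequently observed word following `word`."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("sat"))  # "on" -- it always follows "sat" in this corpus
print(predict_next("the"))  # "cat" -- the most frequent follower of "the"
```

Nothing in this sketch was told what a noun or verb is; the regularities emerge from the counts alone, which is the (radically simplified) heart of Bloom’s point.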

Read more...

Through the window

In our series on AI, LLMs, and Language so far, we’ve explored a few implications of LLMs relating to language and literacy development:

  1. LLMs gain their uncanny powers from the statistical nature of language itself;
  2. the meaning and experiences of our world are more deeply entwined with the form and structure of our language than we previously imagined;
  3. LLMs offer an opportunity for further convergence between human and machine language; and
  4. LLMs can potentially extend our cognitive abilities, enabling us to process far more information.

In a previous series, “Innate vs. Developed,” we’ve also challenged the idea that language is entirely hardwired in our brains, highlighting the tension between our more recent linguistic innovations and our more ancient brain structures. Cormac McCarthy, the famed author of some of the most powerful literature ever written, did some fascinating pontificating on this very issue.

In this post, we’ll continue picking away at these tensions, considering implications for AI and LLMs.

Read more...

The Octopus

“Over cultural evolution, the human species was so pressured for increased information capacity that they invented writing, a revolutionary leap forward in the development of our species that enables information capacity to be externalized, frees up internal processing and affords the development of more complex concepts. In other words, writing enabled humans to think more abstractly and logically by increasing information capacity. Today, humans have gone to even greater lengths: the Internet, computers and smartphones are testaments to the substantial pressure humans currently face — and probably faced in the past — to increase information capacity.”

–Uniquely human intelligence arose from expanded information capacity, Jessica Cantlon & Steven Piantadosi

According to the authors of the paper quoted above, the capacity to process and manage vast quantities of information is a defining characteristic of human intelligence. This ability has been extended over time through tools and techniques for externalizing information, such as language, writing, and digital technology. These advancements have, in turn, allowed for increasingly abstract and complex thought and technologies.

The paper by Jessica Cantlon & Steven Piantadosi further proposes that the power of scaling lies behind human intelligence, and that the same scaling explains the remarkable results achieved by artificial neural networks in areas such as speech recognition, LLMs, and computer vision. These accomplishments, the authors argue, have come not through specialized representations and domain-specific development, but through simpler techniques combined with increased computational power and data capacity.

Read more...
