Language & Literacy

The Surprising Success of Large Language Models

“The success of large language models is the biggest surprise in my intellectual life. We learned that a lot of what we used to believe may be false and what I used to believe may be false. I used to really accept, to a large degree, the Chomskyan argument that the structures of language are too complex and not manifest in input so that you need to have innate machinery to learn them. You need to have a language module or language instinct, and it’s impossible to learn them simply by observing statistics in the environment.

If it’s true — and I think it is true — that the LLMs learn language through statistical analysis, this shows the Chomskyan view is wrong. This shows that, at least in theory, it’s possible to learn languages just by observing a billion tokens of language.”

–Paul Bloom, in an interview with Tyler Cowen

Read more...

In our series on AI, LLMs, and Language so far, we’ve explored a few implications of LLMs relating to language and literacy development:

1) LLMs gain their uncanny powers from the statistical nature of language itself;
2) the meaning and experiences of our world are more deeply entwined with the form and structure of our language than we previously imagined;
3) LLMs offer an opportunity for further convergence between human and machine language; and
4) LLMs can potentially extend our cognitive abilities, enabling us to process far more information.

In a previous series, “Innate vs. Developed,” we also challenged the idea that language is entirely hardwired in our brains, highlighting the tension between our more recent linguistic innovations and our more ancient brain structures. Cormac McCarthy, the famed author of some of the most powerful literature ever written, did some fascinating pontificating on this very issue.

In this post, we’ll continue picking away at these tensions, considering implications for AI and LLMs.

Read more...

Natural digital

Regularity and irregularity. Decodable and tricky words. Learnability and surprisal. Predictability and randomness. Low entropy and high entropy.

Why do such tensions exist in human language? And in the AI tools we’ve developed both to write code and to use natural language, how can the precision required for computation coexist with the necessary complexity and messiness of human language?
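
To make the “low entropy and high entropy” contrast concrete, here is a minimal sketch in Python. It is my own toy illustration with invented probabilities, not corpus data: the next letter after “q” in English is nearly always “u,” so that distribution carries little entropy, while the first letter of a word is far less predictable.

```python
# Shannon entropy over two toy next-letter distributions,
# illustrating the predictability/surprisal contrast above.
import math

def entropy(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# After "q" in English, "u" is almost certain: low entropy, highly decodable.
# (Illustrative probabilities, not corpus estimates.)
after_q = [0.98, 0.01, 0.01]

# At the start of a word, many letters are plausible: high entropy, more surprisal.
word_start = [0.2, 0.2, 0.2, 0.2, 0.2]

print(f"after 'q':  {entropy(after_q):.2f} bits")    # ~0.16 bits
print(f"word start: {entropy(word_start):.2f} bits")  # ~2.32 bits
```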

Read more...

A statistical tapestry

“. . . the fact, as suggested by these findings, that semantic properties can be extracted from the formal manipulation of pure syntactic properties – that meaning can emerge from pure form – is undoubtedly one of the most stimulating ideas of our time.”

–The Structure of Meaning in Language: Parallel Narratives in Linear Algebra and Category Theory

In our last post, we began exploring what Large Language Models (LLMs) and their uncanny abilities might tell us about language itself. I posited that the power of LLMs stems from the statistical nature of language.

But what is that statistical nature of language?
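
As a teaser of where this goes, here is one very simple sense of “statistical”: a toy bigram model, sketched in Python (my own illustration, not the post’s analysis), that learns which words tend to follow which from a tiny made-up corpus. LLMs operate on a vastly richer version of this same principle.

```python
# Estimating bigram statistics from a tiny corpus: the simplest
# sense in which language has a learnable statistical structure.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat saw the dog".split()

# Count, for each word, which words follow it.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

# Conditional probabilities of what follows "the"
total = sum(bigrams["the"].values())
for word, count in bigrams["the"].most_common():
    print(f"P({word!r} | 'the') = {count}/{total} = {count/total:.2f}")
```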

Read more...

“Semantic gradients” are a tool used by teachers to broaden and deepen students' understanding of related words by plotting them in relation to one another. They often begin with antonyms at each end of the continuum. Here are two basic examples:

[Image: two semantic gradient examples]

Now imagine taking this approach and quantifying the relationships between words by adding numbers to the line graph. Then imagine adding another axis, so that words are plotted in a three-dimensional space according to their relationships. Then add another dimension, and another . . . heck, make it tens of thousands of dimensions, relating all the words in your lexicon across a high-dimensional space . . .

. . . and you may begin to envision one of the fundamental powers of Large Language Models (LLMs).
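
Here is a minimal sketch of that idea in Python, with invented three-dimensional vectors (real embeddings have thousands of dimensions and are learned from data, not hand-assigned): words become points, and “semantic distance” becomes a measurable angle between them.

```python
# Toy word vectors on a hot/cold gradient, with similarity
# measured by cosine of the angle between vectors.
import math

embeddings = {
    # Hypothetical axes, invented for illustration:
    # roughly (temperature, intensity, comfort).
    "freezing": [-1.0, 0.9, -0.3],
    "cold":     [-0.7, 0.5, -0.1],
    "warm":     [ 0.6, 0.4,  0.5],
    "hot":      [ 1.0, 0.9,  0.2],
}

def cosine(u, v):
    """Cosine similarity: 1.0 means same direction, -1.0 opposite."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

print(cosine(embeddings["cold"], embeddings["freezing"]))  # high: near on the gradient
print(cosine(embeddings["cold"], embeddings["hot"]))       # low: opposite ends
```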

Read more...

Thanks to the podcast Emerging Research in Educational Psychology, in which professor Jeff Greene speaks with professor Erika Patall about a meta-analysis she lead-authored, I learned about her paper, which looked across a large number of studies to synthesize findings on the impact of classroom structure. Some of the high-level takeaways are well worth highlighting for our 4th research highlight in this series!

  • Citation: Patall, E. A., Yates, N., Lee, J., Chen, M., Bhat, B. H., Lee, K., Beretvas, S. N., Lin, S., Man Yang, S., Jacobson, N. G., Harris, E., & Hanson, D. J. (2024). A meta-analysis of teachers’ provision of structure in the classroom and students’ academic competence beliefs, engagement, and achievement. Educational Psychologist, 59(1), 42–70. https://doi.org/10.1080/00461520.2023.2274104

I think it’s no surprise to most educators that providing structure for kids, both in terms of the classroom environment and culture, and in terms of the design of instructional tasks, is critical to improving student learning. Part of this work is what we often term “classroom management,” but as the paper describes, the work is far more encompassing than that:

“In sum, creating structure is a multifaceted endeavor that involves a diverse assortment of teacher practices that can be used independently or in various combinations, as well as to various extents, and are all intended to organize and guide students’ school-relevant behavior in the process of learning in the classroom.”

Read more...

I wrote a little while ago about Andrew Watson’s excellent book, “The Goldilocks Map.” I recently had an opportunity to attend a Learning and the Brain conference, the same conference series that sparked Andrew’s own journey into brain research and into learning to balance openness to new practices with a healthy dose of skepticism. In fact, Andrew was one of the keynote presenters at this conference, and I think his trenchant advice provided an important grounding for considering many of the other presentations.

I think there’s something in the nature of presenting to a general audience of educators that compels researchers to derive generalized implications of their research, implications that can all too easily overstep the confines of their very specialized and specific domains.

Read more...

A recent paper caught my eye, Ontogenesis Model of the L2 Lexical Representation, and despite the immediate mind-glazing effect of the word “ontogenesis,” I found the model well worth digging into and sharing here—and it may bear relevance to conversations on orthographic mapping.

How we learn words and all their phonological, morphological, orthographic, and semantic characteristics is a fascinating topic of research—most especially in the areas of written word recognition and in the learning of a new language.

Read more...

In our last post in a series exploring the question, “What is (un)natural about learning to read and write?,” we looked at a 1980 paper by Philip Gough and Michael Hillinger, Learning to Read: An Unnatural Act, which countered Ken and Yetta Goodman’s argument that learning to read is natural and provided us with a useful analogy: learning to read an alphabetic writing system is a form of cryptanalysis. Using this analogy, Gough and Hillinger drew a fine-grained distinction between a code and a cipher, one that allowed them to make some precise observations about the difficulty of breaking the alphabetic cipher, observations that have held up quite well over the years.
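
To make the code/cipher distinction concrete, here is a toy sketch in Python. This is my framing, not Gough and Hillinger’s, and the mappings are invented: a code pairs whole words with arbitrary tokens that must each be memorized, while a cipher is a single symbol-level rule that, once cracked, generalizes to words never seen before.

```python
# Code vs. cipher, in miniature. Both mappings below are invented
# purely for illustration.

# A code: each whole word gets an arbitrary token; knowing one pair
# tells you nothing about the next (like memorizing whole words).
codebook = {"cat": "X7", "dog": "Q2"}

# A cipher: one letter-level rule covers every word; break the rule
# once and novel words come for free (like an alphabetic system).
cipher = str.maketrans("abcdefghijklmnopqrstuvwxyz",
                       "bcdefghijklmnopqrstuvwxyza")

print(codebook["cat"])          # X7  -- must be memorized item by item
print("cat".translate(cipher))  # dbu -- the rule generalizes...
print("map".translate(cipher))  # nbq -- ...even to unseen words
```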

Read more...

Sharing a fun paper to geek out on with my fellow language nerds, How children learn to communicate discriminatively by Michael Ramscar. In this paper, the author makes an argument that the contrasting forces of “discriminability” and “regularity” both serve to make language something we pick up pretty much naturally, even if we don’t know all the words in the language.

“…the existence of regular and irregular forms represents a trade-off that balances the opposing communicative pressures of discriminability and learnability in the evolution of communicative codes. From this perspective, the existence of frequent, well-discriminated irregular forms serves to make important communicative contrasts more discriminable and thus also more learnable. By contrast, because regularity entails less discriminability, learners’ representations of lexico-morphological neighbourhoods will tend to be more generic, which causes the forms of large numbers of less frequent items to be learned implicitly, compensating for the incompleteness of individual experience.”

The language of this paper is, as you can see, a bit opaque, so much of it went just a bit over my head, but I found the arguments fascinating given the debates about how to teach the “irregular” spellings of so many words in the English language. Here, the author seems to suggest (I may be over-extrapolating, as I often tend to do, but this is what got me geeking out on it) that there is a constructive tension between the language forms that show up again and again and the forms that are more infrequent but thus inherently draw more of our attention. This relates to the theory of “statistical learning,” by which we not only learn language but also map a language to its written form.

The author later provides what I thought was a very concrete thought experiment that demonstrates this principle when he moves from morphology to names:

Imagine that 33% of males are called John, and only 1% Cornelius. In this scenario, learning someone is named Cornelius is more informative than learning their name is John (Corneliuses are better discriminated by their names than Johns). On the other hand, Johns will be easier to remember (guessing ‘John’ will be correct 1/3 of the time). Further, although the memory advantage of John relies on its frequency, the memorability of Cornelius also benefits from this: Cornelius is easier to remember if the system contains fewer names (also, as discussed earlier, if John is easier to say than Cornelius, this will reduce the average effort of name articulation).
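
The information-theoretic version of this point is easy to check: the surprisal of an event with probability p is -log2(p) bits, so the rarer name carries more information. Here is a quick worked example in Python, using the 33% and 1% figures from the quote above:

```python
# Surprisal (information content) of learning each name,
# using the probabilities from the thought experiment.
import math

p_john = 0.33
p_cornelius = 0.01

print(f"surprisal of 'John':      {-math.log2(p_john):.2f} bits")       # ~1.60 bits
print(f"surprisal of 'Cornelius': {-math.log2(p_cornelius):.2f} bits")  # ~6.64 bits
```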

What is also interesting about the author’s argument in this paper connecting information theory to language learning is that these assertions are empirically testable:

“Whether these mathematical points about sampling and learning actually apply to human learners are empirical questions. This account makes clear predictions in regard to them: if learners are exposed to sets of geometrically distributed forms, they should acquire models of their probabilities that better approximate one another than when learning from other distributions. Conversely, if learning from geometric distributions does not produce convergence, it would suggest the probabilistic account of communication described here (indeed, any probabilistic account of communication) is false.”
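
For the curious, here is a hedged sketch of how one might probe that prediction in simulation. This is my own construction, not the paper’s method: give two simulated “learners” independent samples of word forms, then compare how closely their probability estimates agree under a geometric versus a uniform distribution.

```python
# Two "learners" each sample word forms; we measure how far apart
# their empirical probability estimates end up.
import random
from collections import Counter

def empirical(sample, vocab):
    """Each learner's estimated probability for every form."""
    counts = Counter(sample)
    n = len(sample)
    return {w: counts[w] / n for w in vocab}

def total_variation(p, q):
    """How much two probability estimates disagree (0 = identical)."""
    return 0.5 * sum(abs(p[w] - q[w]) for w in p)

random.seed(0)
vocab = list(range(50))   # 50 hypothetical word forms
n_tokens = 500            # tokens of experience per learner

# Geometric-like weights: each form half as likely as the one before.
geo_weights = [0.5 ** k for k in vocab]

for name, weights in [("geometric", geo_weights), ("uniform", None)]:
    a = random.choices(vocab, weights=weights, k=n_tokens)
    b = random.choices(vocab, weights=weights, k=n_tokens)
    gap = total_variation(empirical(a, vocab), empirical(b, vocab))
    print(f"{name:>9}: learners' estimates differ by {gap:.3f}")
```

On the paper’s account, the geometric condition should show the smaller gap, i.e., the learners’ models converge more closely.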

There’s a lot more in the paper to nerd out on. I found the section on verbs especially interesting, for example, given that it connects to some other tidbits on the power and challenge of verbs I’ve come across before.

I’ll leave the rest to you!

#verbs #regularity #irregularity #learning #language #statisticallearning #probability #discriminability #informationtheory #form