“Learning new information in a second language (L2) is more effortful than in a first language (L1). We found different functional connectivity networks of naturalistic learning through speech among adolescents, confirming this prevalent observation.”
Does learning a language require effort? Does it require more effort when we learn a new language later in life? Why?
Today, we will highlight a study showing the additional brain networks that adolescents activate when learning in a second language – a key insight for all educators to consider.
Language Learning: Effortless for Babies, Effortful for Adults
Babies learn language with such ease that they have already begun to recognize the unique patterns of a language–even to distinguish between the unique patterns of multiple languages–while still in the womb.
We therefore tend to assume there is something wholly innate or natural to learning language.
Yet as we’ve explored previously in a series on this blog, even learning our first languages may not be as innate or natural as it can appear. Human language reflects a unique synchrony between our biological and cultural evolution, finely attuned to the social environment in which we interact.
When I begin a series of blog posts to conduct nerdy inquiry into an abstract topic, I generally don't know where I'm going to end up. This series on LLMs was unusual in that in our first post, I outlined pretty much the exact topics I would go on to cover.
Here's where I had spitballed we might go:
The surprisingly inseparable interconnection between form and meaning
Blundering our way to computational precision through human communication; Or, the generative tension between regularity and randomness
The human (and now, machine) capacity for learning and using language may simply be a matter of scale
Is language as separable from thought (and, for that matter, from the world) as Cormac McCarthy said?
Implicit vs. explicit learning of language and literacy
Indeed, we then went on to explore each of these areas, in that order. Cool!
“The success of large language models is the biggest surprise in my intellectual life. We learned that a lot of what we used to believe may be false and what I used to believe may be false. I used to really accept, to a large degree, the Chomskyan argument that the structures of language are too complex and not manifest in input so that you need to have innate machinery to learn them. You need to have a language module or language instinct, and it’s impossible to learn them simply by observing statistics in the environment.
If it’s true — and I think it is true — that the LLMs learn language through statistical analysis, this shows the Chomskyan view is wrong. This shows that, at least in theory, it’s possible to learn languages just by observing a billion tokens of language.”
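What "learning language through statistical analysis" means can be sketched with a toy bigram model: count which words follow which, and turn the counts into probabilities. This is a deliberately minimal illustration with a made-up ten-word corpus, nothing like the architecture of an actual LLM, but it shows prediction emerging from raw counts with no grammar rules supplied.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the "billion tokens" — purely illustrative.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigram frequencies: how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word_probs(prev):
    # Estimate P(next | prev) from the counts alone.
    counts = bigrams[prev]
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

print(next_word_probs("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

Even in this tiny sketch, the model has "learned" that "cat" is the likeliest word after "the" — purely by observing statistics in its (miniature) environment.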
In a previous series, “Innate vs. Developed,” we’ve also challenged the idea that language is entirely hardwired in our brains, highlighting the tension between our more recent linguistic innovations and our more ancient brain structures. Cormac McCarthy, the famed author of some of the most powerful literature ever written, did some fascinating pontificating on this very issue.
In this post, we’ll continue picking away at these tensions, considering implications for AI and LLMs.
“Over cultural evolution, the human species was so pressured for increased information capacity that they invented writing, a revolutionary leap forward in the development of our species that enables information capacity to be externalized, frees up internal processing and affords the development of more complex concepts. In other words, writing enabled humans to think more abstractly and logically by increasing information capacity. Today, humans have gone to even greater lengths: the Internet, computers and smartphones are testaments to the substantial pressure humans currently face — and probably faced in the past — to increase information capacity.”
According to the perspectives of the authors in the paper quoted above, the capacity to process and manage vast quantities of information is a defining characteristic of human intelligence. This ability has been extended over time through the development of tools and techniques for externalizing information, such as via language, writing, and digital technology. These advancements have, in turn, allowed for increasingly abstract and complex thought and technologies.
The paper by Jessica Cantlon & Steven Piantadosi further proposes that the power of scaling lies behind human intelligence, and that this same power lies behind the remarkable results achieved by artificial neural networks in areas such as speech recognition, LLMs, and computer vision. These accomplishments, they argue, were achieved not through specialized representations and domain-specific development, but through simpler techniques combined with increased computational power and data capacity.
Regularity and irregularity. Decodable and tricky words. Learnability and surprisal. Predictability and randomness. Low entropy and high entropy.
Why do such tensions exist in human language? And in our AI tools developed both to create code and to use natural language, how can the precision required for computation coexist with the necessary complexity and messiness of human language?
“. . . the fact, as suggested by these findings, that semantic properties can be extracted from the formal manipulation of pure syntactic properties – that meaning can emerge from pure form – is undoubtedly one of the most stimulating ideas of our time.”
In our last post, we began exploring what Large Language Models (LLMs) and their uncanny abilities might tell us about language itself. I posited that the power of LLMs stems from the statistical nature of language.
“Semantic gradients” are a tool used by teachers to broaden and deepen students' understanding of related words by plotting them in relation to one another. They often begin with antonyms at each end of the continuum. Here are two basic examples:
Now imagine taking this approach and quantifying the relationships between words by adding numbers to the line graph. Now imagine adding another axis to this graph, so that words are plotted in a three-dimensional space according to their relationships. Then add another dimension, and another . . . heck, make it tens of thousands more dimensions, relating all the words available in your lexicon across a high-dimensional space. . .
. . . and you may begin to envision one of the fundamental powers of Large Language Models (LLMs).
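The jump from a one-dimensional gradient to a multi-dimensional space can be sketched numerically. The vectors below are hand-assigned toy coordinates along two invented axes — real LLM embeddings are learned from data and span thousands of dimensions — but the core operation is the same: words become points, and closeness of meaning becomes closeness in space, measured here with cosine similarity.

```python
import math

# Hand-made toy "embeddings" for a temperature gradient: each word is a point
# along two invented axes. Real embeddings are learned, not hand-assigned.
vectors = {
    "freezing": [-1.0, 0.9],
    "cold":     [-0.6, 0.5],
    "cool":     [-0.3, 0.2],
    "warm":     [ 0.4, 0.3],
    "hot":      [ 0.8, 0.7],
    "scalding": [ 1.0, 1.0],
}

def cosine_similarity(a, b):
    # Angle-based closeness: 1.0 means the vectors point the same way.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Words adjacent on the semantic gradient score as more similar than antonyms.
print(cosine_similarity(vectors["hot"], vectors["scalding"]))  # close to 1
print(cosine_similarity(vectors["hot"], vectors["freezing"]))  # negative
```

Scale this up to tens of thousands of dimensions and an entire lexicon, and you have one of the basic representational moves underlying LLMs.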
Thanks to the podcast Emerging Research in Educational Psychology, in which professor Jeff Greene speaks with professor Erika Patall about a meta-analysis she lead-authored, I learned about her paper, which synthesized findings across a large number of studies on the impact of classroom structure. I thought some of the high-level takeaways were well worth highlighting for our 4th research highlight in this series!
Citation: Patall, E. A., Yates, N., Lee, J., Chen, M., Bhat, B. H., Lee, K., Beretvas, S. N., Lin, S., Man Yang, S., Jacobson, N. G., Harris, E., & Hanson, D. J. (2024). A meta-analysis of teachers’ provision of structure in the classroom and students’ academic competence beliefs, engagement, and achievement. Educational Psychologist, 59(1), 42–70. https://doi.org/10.1080/00461520.2023.2274104
I think it’s no surprise to most educators that providing structure for kids, both in terms of the classroom environment and culture, and in terms of the design of instructional tasks, is critical to improving student learning. Part of this work is what we often term “classroom management,” but as the paper describes, the work is far more encompassing than that:
“In sum, creating structure is a multifaceted endeavor that involves a diverse assortment of teacher practices that can be used independently or in various combinations, as well as to various extents, and are all intended to organize and guide students’ school-relevant behavior in the process of learning in the classroom.”
Citation: Capin, P., Vaughn, S., Miller, J. E., Miciak, J., Fall, A.-M., Roberts, G., Cho, E., Barth, A. E., Steinle, P. K., & Fletcher, J. M. (2023). Investigating the reading profiles of middle school emergent bilinguals with significant reading comprehension difficulties. Scientific Studies of Reading. https://doi.org/10.1080/10888438.2023.2254871
A few months ago, a study crossed my radar that caused me to stop, print it out, mark it up, and then begin digging into related studies, which is what I do when a study grabs my attention.
Getting into research is akin to getting into Miles Davis—if you like a given song or album, you may start checking out the other musicians he played with, and they'll lead you into a new and ever-expanding fractal universe, because Davis had a knack for collaborating with musicians who were geniuses in their own right. A few examples: John Coltrane, Tony Williams, Keith Jarrett, Herbie Hancock, John McLaughlin, Wayne Shorter, Jack DeJohnette; the list goes on and on.