Language & Literacy

Musings about language and literacy and learning

The “science of reading” has become a loaded term — partly due to how “science” itself is conceived.

In Part I, we examined a 2003 article by Keith Stanovich that proposed five different “styles” that can influence how science is conducted and perceived. In that article, we learned that education may tend to lean toward the “coherence” of tidy narratives or the “uniqueness” of silver-bullet fads. These tendencies can subvert science-based reading practice.

In Part II, we will look at yet another stellar 2003 piece by Paula and Keith Stanovich titled, “Using Research and Reason in Education: How Teachers Can Use Scientifically Based Research To Make Curricular & Instructional Decisions.”


I’ve observed an interesting divide in how people react to and interpret the term “the science of reading” (or “SOR” for short).

For some, the term elicits eager head nodding — it’s even become incorporated into the sales pitch of many a vendor of education products. For others, the term elicits a gut reaction akin to disgust.

There’s a lot wrapped up in how someone thinks of “science” at large that then influences their reaction to the term “science of reading.” But don’t just take my word for it. Keith and Paula Stanovich penned some really insightful pieces about this in the early 2000s, and outlined how educators can understand and leverage science to inform their own instructional practice.


After the recent mass murders by disturbed teenagers with all-too-readily available assault weapons, it’s hard to see a way forward given the dysfunction of our political system. Short of federal gun regulation, there are other areas we can influence that could help prevent troubled teenagers from making plans to hurt themselves and others.

There’s a debate that has flared up around mass shootings that over-simplifies the issues into gun control vs. mental health. Yet these both need to be part of the conversation. We need to decrease access to assault weapons, while increasing access to sustained mental health services.


One of the keynotes at the NYC Learning and the Brain conference I’d just posted about was by Jonathan Gottschall on “The Story Paradox,” based on his book of the same name.

His talk was compelling enough that I promptly read his book as well. Like Gottschall, I’ve done some pondering about Plato’s long-ago warnings against the power of the written word, back when I did a nerdy deep dive into the roots of close reading (“Close Reading: The Context of an Exegesis“), so this idea that storytelling can be a double-edged sword resonated with me. And his warnings about the dangers of storytelling, particularly through social media, seemed an important part of the puzzle of the rise of Trump, the far right, and QAnon, among other phenomena such as anti-vaxxers, during this turbulent age dominated by Facebook and Twitter.


I wrote a little while ago about Andrew Watson’s excellent book, “The Goldilocks Map.” I recently had an opportunity to attend a Learning and the Brain conference, the very kind of event that sparked Andrew’s own journey into brain research and into learning to balance openness to new practices with a healthy dose of skepticism. In fact, Andrew was one of the keynote presenters at this conference, and I think his trenchant advice provided an important grounding for considering many of the other presentations.

I think there’s something in the nature of presenting to a general audience of educators that compels researchers to derive generalized implications from their research, implications that can all too easily overstep the confines of their very specialized and specific domains.


There was a relatively recent Hechinger Report article by Jill Barshay, “PROOF POINTS: Researchers blast data analysis for teachers to help students,” that seemed to indict any and all assessment and data use in schools as a royal waste of time. It bothered me because the only source cited explicitly in the article was a 2020 opinion piece by a professor who likewise discusses “interim assessment” only vaguely and doesn’t provide explicit citations for her sources.

I tweeted out my annoyance to this effect.

To Ms. Barshay’s great credit, she responded with equanimity and generosity to my tweet with multiple citations.

Since she took that time for me, I wanted to reciprocate by taking the time to review her sources with an open mind, as well as reflect on where I might land after doing so.


Ontogenesis model

A recent paper caught my eye, Ontogenesis Model of the L2 Lexical Representation, and despite the immediate mind-glazing effect of the word “ontogenesis,” I found the model well worth digging into and sharing here—it may also bear relevance to conversations on orthographic mapping.

How we learn words and all their phonological, morphological, orthographic, and semantic characteristics is a fascinating topic of research—most especially in the areas of written word recognition and in the learning of a new language.


*Back in 2013, I wrote a series of posts for the Core Knowledge Foundation blog titled, “Promethean Plan: A Teacher on Fulfilling the Intent of the Common Core.” Unfortunately, they no longer appear to be available there, so I thought it could be fun to re-post them collected here as one post, both to archive them and to see whether the mistakes I outlined were indeed part of the squandering of the opportunity presented by the CCSS.

My 2013 classroom self, as you will see, was a bit more grandiose, but methinks I made a few good points. I’ll leave the rest to your consideration.*


I met Andrew Watson at a post-researchED pub event in Philly a few years back. I’m an introvert and not terribly excited about talking to strangers in noisy settings, and I saw Andrew standing there looking aloof, so he seemed like someone good to chat with. I had no idea who he was; I just recall him saying something about neuroscience and conferences, and he may have had a card. I probably, in my naiveté, thought he was shilling for a company or something (lol). Anyway, I later followed him on Twitter, and over time began to really appreciate his wise and often sardonic takes on education and research, and made a note to check out his book, The Goldilocks Map.

I do take a while to get around to education-related books, but I’ve been reading more and more research these days, and it seemed like high time to finally pick up his book, as I am a complete amateur at understanding methods, or anything, really, beyond the abstract of most papers.

And I’m really glad I did, and I recommend you do, too. He’s got a dry wit that can make you laugh out loud, while at the same time dropping critical knowledge throughout in a clear and concise way.

The Goldilocks Map is about striking a just-right balance: openness to new evidence-based teaching methods, while maintaining a disciplined skepticism to ensure you’re not jumping into the latest edu-brain fad that will waste your, your colleagues’, and your students’ precious learning time.

Watson gives classroom teachers a step-by-step process for determining whether or not to listen to the latest wisdom bestowed upon you in a PD, starting with asking any source of a new practice, “What’s the best research you know of that supports it?” How your source responds to that question can immediately tell you whether or not to go further.

I’ve taken up this quest since reading his book, and I had a really great interaction with a well-respected researcher, in which he acknowledged that a particular passage in one of his papers may have been a bit over-emphatic, and pointed to some more nuanced research findings that complicated the issue. Boom. He has thus become a trusted source for me. As Watson puts it in his book, “Trustworthy sources want us to want more information.” Indeed.

Watson gives us questions, tools, and shortcuts for digging deeper into real research, and actually, part of the fun of reading his book is watching him surgically dissect key studies over the course of the chapters. It’s a tour de force.

One interesting personal takeaway from reading his book was that my purpose and methodology in reading education research are a bit different from some of these approaches–and that’s OK. I read research more like the former English major that I am: I typically read for thematic patterns, well-crafted ideas, and arguments that accumulate across papers. And for my purposes — as someone now outside of the classroom, less interested in specific practices I can apply tomorrow and more interested in key frameworks and models that can inform district-wide, school-wide, and classroom approaches — that makes sense.

That said, I took away sharp insights and approaches for reading research with a more informed and critical eye. Watson is not afraid to get technical, and I’m going to need to go back and re-read the book to really internalize and apply some of his methods.

I highly recommend picking up this book and adding it to your collection.

#bookreview #research #PD #evidence


We recently examined Phillip Gough and Michael Hillinger’s 1980 paper, Learning to Read: An Unnatural Act, in which they drew a neat analogy between learning to decode an alphabetic writing system and cryptanalysis. As a part of this cryptanalysis, children aren’t simply learning to decode but, more precisely, learning to decipher the written code. This distinction highlights that learning to read in English is not driven by paired-associate learning, but rather by internalizing an algorithm: a statistical, systematic, quasi-regular mapping.

This point is a sharp one because it means we can’t teach such a cipher directly. We can’t just hand a kid the codebook.
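To make the decode-vs-decipher distinction concrete, here’s a toy sketch (my own illustration, not from the paper — the word lists and letter-sound rules are invented for the example). Paired-associate learning is like a memorized lookup table: it only knows the exact words it was taught. A cipher is a set of letter-sound correspondences that generalizes to words never seen before.

```python
# Paired-associate "codebook": each whole word memorized individually.
codebook = {"cat": "/kat/", "hat": "/hat/"}

def read_by_pairs(word):
    # Fails (returns None) on any word not explicitly memorized.
    return codebook.get(word)

# Cipher: letter-sound correspondences that generalize.
# (Wildly simplified one-letter-to-one-sound rules for illustration.)
rules = {"c": "k", "a": "a", "t": "t", "h": "h", "m": "m"}

def read_by_cipher(word):
    # Applies the mapping letter by letter, so it handles novel words.
    return "/" + "".join(rules[ch] for ch in word) + "/"

print(read_by_pairs("mat"))   # None: "mat" was never memorized
print(read_by_cipher("mat"))  # /mat/: the rules generalize
```

Of course, real English spelling is only quasi-regular — which is exactly why the mapping must be internalized rather than handed over as a simple codebook.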

So when I saw a reference recently to another Gough paper called Reading, spelling, and the orthographic cipher, co-written in 1992 with Connie Juel and Priscilla Griffith, I knew I needed to read this one, too.

