Scaling Our Capacity for Processing Information

The Octopus

“Over cultural evolution, the human species was so pressured for increased information capacity that they invented writing, a revolutionary leap forward in the development of our species that enables information capacity to be externalized, frees up internal processing and affords the development of more complex concepts. In other words, writing enabled humans to think more abstractly and logically by increasing information capacity. Today, humans have gone to even greater lengths: the Internet, computers and smartphones are testaments to the substantial pressure humans currently face — and probably faced in the past — to increase information capacity.”

–Jessica Cantlon & Steven Piantadosi, “Uniquely human intelligence arose from expanded information capacity”

According to the authors of the paper quoted above, the capacity to process and manage vast quantities of information is a defining characteristic of human intelligence. This ability has been extended over time through the development of tools and techniques for externalizing information, such as language, writing, and digital technology. These advancements have, in turn, allowed for increasingly abstract and complex thought and technologies.

Cantlon & Piantadosi further propose that the power of scaling lies behind human intelligence, and that this same power lies behind the remarkable results achieved by artificial neural networks in areas such as speech recognition, LLMs, and computer vision. In their view, these accomplishments have come not from specialized representations and domain-specific development, but from simpler techniques combined with increased computational power and data capacity.

I think the authors may be overselling scaling as the main factor behind intelligence, but scale certainly plays a leading role alongside brain and neural network architecture and specialized data, and it plays a clear role in how human language is used and developed.

The Potential of Scale

“LLMs give us a very effective way of accessing information from other humans.”

–Alison Gopnik in an interview with Julien Crockett in the Los Angeles Review of Books

In our previous explorations of language, cognition, and Large Language Models (LLMs), the power of scale has emerged as a recurring theme.

We've delved into the statistical nature of language, where the vast interconnectedness of word combinations and their contextual relationships drive LLMs' generative abilities. We've pondered the inherent imprecision of human language and the journey towards computational precision in LLMs. And throughout, the concept of scale has remained central – the scale of data, the scale of computation, and the scale of language itself.
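To make that statistical point concrete, here is a deliberately tiny sketch in Python: a bigram model that counts which words follow which and then samples from those counts. The toy corpus and the generate function are my own inventions for illustration; modern LLMs learn far richer contextual representations over vastly larger corpora, but the underlying intuition, that the distribution of word combinations can drive generation, is the same.

```python
# A toy illustration of the statistical idea behind language modeling:
# count which words follow which, then sample the next word from those counts.
# This is a bigram sketch for intuition only, not how modern LLMs are built.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each preceding word.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length=6):
    """Generate text by repeatedly sampling a plausible next word."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the mat and"
```

Even at this toy scale, the model's output reflects the statistics of its corpus; scale up the data and the model, and the same basic principle yields far more fluent generation.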

It's intriguing to consider the possibility, as this paper suggests, that the capacity to process increasing amounts of information may have been a key factor in the development of human intelligence. This idea extends to how, as a species, we have continually sought ways to expand our ability to store and access information, from the invention of writing to the development of computers, the internet, and smartphones.

This suggests that the most exciting potential of artificial neural networks such as LLMs may lie not only in their ability to respond to and generate human language, but also in their ability to help us process and manage vast quantities of information, and thus further extend our cognitive capabilities. Framed this way, the debate shifts from whether LLMs already demonstrate human intelligence, and whether they will soon achieve superhuman intelligence, to whether they will equip us with superhuman abilities. And, as always with advances in powerful technologies, the question is who among us will gain the most from those abilities and whether the new tools will widen or narrow disparities between groups (i.e. “the future is already here — it's just not very evenly distributed”).

So far, then, we’ve explored a few implications of LLMs for language and literacy development: 1) LLMs derive their uncanny powers from the statistical nature of language itself; 2) LLMs present us with an opportunity for further convergence between human and machine language; and 3) LLMs present us with an opportunity to further extend our cognitive abilities by allowing us to process far more information.

The Dark Side of Scale

All of this said, there is a dark side to scale, as Geoffrey West elucidates in his book, Scale (more on this on my other blog, Schools & Ecosystems): as we continue to scale our technologies, we consume far more energy and create far more waste, beyond our biological needs and functions, than any other creature on earth. As West describes it, we humans are energy-guzzling behemoths, using thirty times more energy than nature intended for creatures our size. Our outsized energy footprint makes our 7.3 billion population act as if it were in excess of 200 billion people. And as we do so, we are hitting the upper limits of the earth’s ecological constraints.
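As a quick back-of-envelope check on those figures (the thirty-fold multiplier and the 7.3 billion population are West's numbers as I've summarized them; the variable names below are just mine):

```python
# Back-of-envelope arithmetic for West's "effective population" framing:
# if each human uses ~30x the energy of a comparable-sized animal,
# 7.3 billion people consume like a far larger biological population.
biological_population = 7.3e9   # humans alive, per the book's framing
energy_multiplier = 30          # energy use relative to our "natural" share

effective_population = biological_population * energy_multiplier
print(f"{effective_population:,.0f}")  # 219,000,000,000, i.e. in excess of 200 billion
```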

Similarly, as LLMs extend our capabilities, they consume ever more power as they ingest and produce ever more data. So at the very same time that our earth is accelerating towards critical thresholds of environmental change, wreaking havoc on insect, animal, soil, and plant life, we are accelerating our consumption of energy and production of waste.

It’s hard to see a clear end in sight to this. It’s possible that the greedy demands of continuing to scale AI model training and use end up driving rapid development of greener technologies and accelerated efficiency in digital computation and compression. It’s just as possible that in our short-sighted endeavors we put a half-life on human civilization through uncontainable war, famine, disaster, and disease.

Not to end this post on such a sour note, but it is important to maintain a healthy skepticism about a new technology and its attendant powers, even as we seek to gain from it. And from what I see in the discourse, there has been a pretty healthy mix of boosterism and critique, excitement and paranoia about it all, so I’m enjoying the ride nonetheless.

#cognition #language #AI #LLMs #technology #brains #scale