By Nathan Curland
Let me start by stating that A Thousand Brains by Jeff Hawkins is likely the most profound and engaging book on neuroscience that I have ever read. That is because it is more than just neuroscience; it is an attempt to present a new theory of brain function to a general audience and then extrapolate to what it means for the future of the human race.
Hawkins’ goal is to appeal not just to other neuroscientists, but also to the layperson who wonders where our intelligence, or even our consciousness, comes from. As such, he gives us enough information to understand what has been learned in the last couple of decades about brain structure but does not dwell on the detailed chemistry that other neuroscience books devote many pages to. (That is, it is enough to know what a neuron or synapse does, without knowing the various molecules and interactions involved in performing those functions.)
Along the way, Hawkins shares with us the journey that he and his team of researchers took to get to this theory. However, he is careful to point out that there are still many details and experimental results that must be filled in before it becomes a truly confirmed theory (à la evolution). Many times he refers to it as a “framework,” but one that he has much confidence in.
Before getting to the particulars, it is important to understand Hawkins’ background. His original degree was in electrical engineering and he initially worked at Intel. However, he was always fascinated by the brain and how it functioned. So, inspired by an article by Francis Crick (of DNA fame), he explored entering a doctoral program that would encompass an overall theory of brain function. He applied to and was rejected by a number of institutions, either because the brain was regarded there as simply a computational machine (MIT’s Artificial Intelligence Lab) or because the project was too ambitious and he would not accomplish enough in the requisite time (typically five years) to earn his doctorate.
Discouraged, he went back to electrical engineering and, with others, founded Palm Computing, Inc. – and made a lot of money. After 10 years he decided to go back to his primary love, neuroscience. He gracefully resigned from his company and in 2002 created the Redwood Neuroscience Institute, a collaboration of university neuroscientists interested in working on brain science. However, he soon found this arrangement unworkable, since most of the professors had their own ideas about what to work on, rather than a common goal.
So after three years he turned the Institute over to the University of California, Berkeley (which had rejected his thesis proposal a decade earlier!) and founded Numenta, his private independent research group, staffing it with bright scientists committed to his vision. Besides conducting its own research, studies, and computer simulations of the brain, Numenta served as a focal point for conferences and a meeting place where neuroscientists from all over the world could come, share data, and discuss.
Old Brain vs. New Brain
The book is divided into three sections. Section I, “A New Understanding of the Brain,” discusses the new theory, how it differs from existing theory, and how it better explains the myriad data that scientists have discovered and published over many decades of neurological research. Existing theories treat the brain as a purely hierarchical structure, with information coming in from the various senses and then being transferred up to higher and higher levels, becoming more detailed and complex at each level – almost like the design of a computer, which is how today’s AI systems are structured.
However, for Hawkins, the human brain wasn’t “designed”; it has evolved from lower forms. There is some hierarchy, but that is not the whole story. A fundamental “old” brain, which all lifeforms have to some degree, handles basic functions (movement, digestion, aggression, sex, etc.). A “new” brain, i.e., the neocortex, evolved because thinking is helpful for finding food, avoiding predators, and living long enough to procreate.
In fact, the main difference between the “higher” and “lower” order animals is the size of their neocortex (in Homo sapiens, it is about 70 percent of brain volume). Whereas the “old” brain is composed mainly of distinct organs with unique structures suited to different functions, the neocortex is composed of hundreds of thousands of “cortical columns” (each of which can contain thousands of mini-cortical columns). Although each column is a very complex layered structure, they are all pretty much the same, repeated many times.
Furthermore, the neocortex, once established, did not grow vertically (in layers) from lower species to higher species but laterally (in area) – hence the folding structure of our brains, where more material needed to be squeezed into a solid cranium whose volume did not grow as quickly. It’s as if it were easier, from an evolutionary point of view, to simply duplicate something that appears to work rather than “invent” something new each time. (It should be noted that these columns also exist in parts of the “old” brain. There is continuity in evolution at work here!)
So the fact that MRI systems show different parts of the neocortex lighting up when different mental functions are performed comes not from unique structures, but from the wiring between the parts of the neocortex – all of which are structurally similar. Hawkins gives many real-life examples of how a purely hierarchical model of the mind cannot be reconciled with this kind of physical neocortex structure and why something different must be found.
It has been known for some time that the brain is a memory prediction machine. Through constant learning, it makes a model of the world, instantaneously updating the model as the world changes in time and the senses deliver the changes to the brain. The same is true for higher-level concepts such as language, music, mathematics, etc. This model is the basis for our predictions, perceptions, and actions.
It has also been discovered that map-creating neurons exist in the hippocampus and entorhinal cortex in the “old” brain. In 1971, “place” cells were identified that permit the brain to recognize what things are. Then, in 2005, “grid” cells were identified that permit knowledge of where things are (with respect to a reference, of course). Similar types of cells exist in the cortical columns of the neocortex. The existence of these cells implies that brains can create reference frames of what things are and where they are with respect to the body.
The cortical columns provide the mechanism for this, creating reference frames that have retrievable knowledge stored in them. Furthermore, it was discovered that knowledge of an item is stored in thousands of complementary columns, but not necessarily redundantly since information is coming to the columns from different sensors via different neurons.
This leads to the “binding problem”: how do inputs from the different sensors combine into a singular, non-distorted perceptual experience? Hawkins’ answer is that the different columns develop a “consensus” via “voting” neurons, which send information along their axons out to other columns. The excitations quickly settle into a stable configuration, which is what we perceive.
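As a loose illustration of this voting idea (my own toy sketch, not code from the book – the `vote` function and the candidate sets are hypothetical), the consensus step can be caricatured as each column proposing the objects consistent with its own input, with the stable perception being whatever survives across all columns:

```python
def vote(columns):
    """Toy 'voting' step: each column contributes the set of objects
    consistent with its own sensory input; this one-shot intersection
    stands in for the iterative settling Hawkins describes."""
    return set.intersection(*columns)

# Three columns sensing different parts of the same object each
# propose overlapping candidate sets (hypothetical example data):
touch = {"coffee cup", "bowl", "vase"}
vision = {"coffee cup", "vase"}
edge_detection = {"coffee cup", "stapler"}

print(vote([touch, vision, edge_detection]))  # {'coffee cup'}
```

The real mechanism is of course iterative and neural, not a single set operation, but the sketch captures why thousands of partially redundant columns can converge on one perception.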
Non-Intelligent AI
In part II, Hawkins discusses “machine intelligence.” He notes that what currently passes for AI has no real I (intelligence). To be intelligent, the machine must be able to learn continuously, not through endless training, as current systems require. His definition of intelligence is “the ability of a system to learn a model of the world” (reality). For this to happen, he proposes it must learn the way the human neocortex learns. Current AI systems are basically “one-trick ponies,” able to do only the task they were programmed or taught to do.
Hawkins also tackles the presumed existential threats to humanity that are frequently ascribed to future intelligent machines. This is not about bad people using AI to destroy civilization but about the possibility of AI itself being a bad actor. He points out that the reason people can be bad actors is not intelligence (the neocortex) but adaptations in the “old” brain that evolutionary pressures created to ensure the survival and propagation of our genes. That is, humans have goals because the “old” brain created those goals. Intelligent machines will only have the goals we give them. It is up to us, the designers, not to give them human-like emotions. Unless we deliberately give our future AI machines those “old” brain goals, there is no reason to believe they will deliberately become bad actors.
Existential Risks and False Beliefs
In Part III, Hawkins discusses “Human Intelligence.” This, he believes, is where the real existential threats to human survival exist. For 3.5 billion years, life has been driven by basic evolutionary pressures: competitive survival and procreation. Human intelligence has allowed Homo sapiens to flourish and succeed. However, the recent rapid rise in technology and scientific discovery has led to significant existential risks, the most significant being nuclear war and severe climate change.
For Hawkins, these existential risks arise due to two fundamental systemic risks built into our brains. First are the risks associated with the “old” brain: the part of our brain that harbors many short-sighted behaviors that were useful for gene procreation/survival but not necessarily desirable for assuring the future of humanity. He asserts that human-caused climate change is primarily due to overpopulation and the amount of pollution created per person.
Second are the risks associated with the neocortex, notably the creation of false beliefs. These false beliefs are created because the brain can only know a subset of the real world, what we perceive, since our model of the world is not necessarily the real world itself. We live in a simulation that our brain has created over our lifetime and this model can be wrong. Furthermore, cultural memes, such as religious beliefs, can spread some of these false models because they have evolutionary advantages. These false beliefs can convince large segments of the population that climate change is not real, or that an afterlife exists where one will be spared if any calamity happens on this earth.
In the final chapters, Hawkins discusses various options put forth to either extend humanity’s reign or at least preserve our collected knowledge for future civilizations so that our existence will not be forgotten in the endless breadth of time. If you are a science fiction fan, you will have encountered most of them and Hawkins discusses their viabilities in great detail. I will leave that for the reader to discover.
All in all, A Thousand Brains is a well-written, informative, mind-opening read. I highly recommend it.
P.S. It is on Bill Gates’ list of the best five books of 2021.
Winning the Lottery
May 8, 2022
By Harlan Garbell
You may not be interested in war, but war is interested in you.
—Leon Trotsky
Vietnam annual draft lottery
There is a genre of fiction called “alternate history” (sometimes referred to as “alternative history”). You no doubt have either read a novel or watched a movie or television program based on an alternate history of events. Some of these books or programs are very good, some not. But they all challenge us to see the world as it might have been had someone made a different choice, or had chance intervened to change the trajectory of human events.
As I have aged and looked back on my own life, I have often marveled at how things could have been so different had I made just one different choice. In many ways, your reading this article at this moment in time is really the culmination of innumerable choices, or choices not made, by both you and me over our lifetimes. Many of those choices were wise, some were not. Even so, here we are.
We also need to factor in chance. Hasn’t chance played a prominent role in your life, where luck, either good or bad, significantly changed the trajectory of your life? For example, think about how you met your partner or significant other. Did you meet in a class at college? If so, what if your future partner was able to get into that class only because someone else dropped out at the last minute? You get the idea.
However, that is really just scratching the surface of how we, at this precise moment, came to be who we are. We would also need to factor in the choices of both of our parents (biological and adoptive) during their lifetimes. Right? And why stop there? What about the choices of their biological parents, and so on? For example, what if your grandparents decided to immigrate to America in 1912 instead of Australia only because a cousin wrote a letter to your grandfather saying jobs were plentiful in the mills along the Mississippi River here in Minneapolis?
But even then you would need to factor in chance, or perhaps more correctly in your grandparents’ case, choices made by others over which they had no control. For example, what if your grandparents chose to come to America in steerage on the passenger liner Titanic in that year? As you may know from reading history (or seeing the movie), the captain of that ship (Edward J. Smith) made the foolish choice to maintain his current speed and disregard the danger of icebergs. So, in effect, those unfortunate souls who drowned could have included your grandparents and there would be no “you” to read this article.
I have always been fascinated by the intersection of individual choices with historical events, which are usually beyond the control of the average individual. And how that intersection of choice and chance can then set off a further chain of events leading to the present moment. For example, in my own life I made a very unwise choice to drop out of college during my sophomore year. I was not very happy with what I was doing and where I was at and thought I could always pick up my studies at a later time without consequences.
Why was this choice unwise? I made this decision in March 1965. For those of you who don’t remember (or are too young), that was the month the first U.S. combat troops (3,500 Marines) landed in South Vietnam on orders from President Johnson to enter the ongoing war between the South Vietnamese government and local insurgents, the Viet Cong. That was also the month that the U.S. first started bombing North Vietnam in what was ominously called “Operation Rolling Thunder.”
Consequently, the United States went from being a military advisor in a war to being at war. By dropping out of college, I had lost my student deferment, resulting in my receiving a letter a month or so later from the U.S. Selective Service informing me that I was now classified 1-A – “Available for Military Service.” So, bottom line, I cleverly managed to lose my student deferment from the draft in the very same month that our country went to war. (Nice work, kid!)
In February 1965 the Selective Service inducted 3,000 young men into the Army. In March that number increased to 15,000 inductees per month. In July it more than doubled to 35,000 per month. I had planned on returning to college in September but now realized that those plans were in jeopardy. Johnson was not playing around; the U.S. was fully committed to defending South Vietnam from a Communist takeover.
Then came a twist in my fortunes. In July 1965 I was playing a pickup basketball game outdoors when I felt a sharp pain in my chest. The pain did not subside and I was barely able to make it to the nearest hospital emergency room. I was promptly examined and treated for a collapsed lung (pneumothorax). I remained in the hospital for a few days thereafter and at the time thought this was an unlucky break.
I went back to college in September but learned to my chagrin that I was no longer eligible for a further deferment. When I was inevitably called up for my draft physical in December of that year, my lung had apparently not fully healed and I was classified 1-Y – “Qualified for Service Only in Time of War or National Emergency.” (Weren’t we at war already?) In any event, they didn’t call me back. I dodged a bullet – perhaps literally. My “unlucky break” in July wasn’t unlucky at all, as it turned out.
But there’s another twist to this story. In 1969, while living in California, I received a notice in the mail indicating that I was now subject to the first draft lottery in this country since 1942. Men born from January 1, 1944, through December 31, 1950, would be called up based on the order in which 366 days of the year – each written on a small piece of paper, sealed in a small blue capsule, and dumped into a glass jar – were randomly selected. Your birth date determined your fate. On December 1, 1969, the piece of paper with my birth date on it was the 352nd pulled from the jar. The Selective Service never bothered me again even though the war dragged on for six more years. For all intents and purposes, I won the lottery.
I am sure all of you who are reading this account have similar stories about events in your lives. How choices you made in your life turned out differently than what you expected, or how “luck” or historical fate intervened to change the trajectory of your life. This can also be true if you did not make a choice when you had the opportunity. For example, lacking the confidence, or being too fearful, to seek something you really wanted – something you later regretted over the years.
That Vietnam story was just one of many life-changing episodes in my life. Thanks to a pickup basketball game, and some lottery luck, I was saved from a foolish choice and did not become cannon fodder in a misguided, faraway war. However, that episode changed the trajectory of my life in more ways than one. I hadn’t really even heard of Vietnam at the time, but this event focused my attention on this terrible episode in our country’s history and radically changed my views on politics and my willingness to act on those views. This then led to a chain reaction of other choices in my life, leading me eventually to write this article you are now reading. But the immediate significance of that episode was that I survived when so many others my age did not, or came back horribly wounded, physically or psychologically.
Anthropologists have determined that anatomically modern humans first walked this planet about 150,000 years ago. This is about 7,500 generations of our species. Think about the odds of all of your ancestors (paternal and maternal) being able to survive long enough despite ubiquitous famine, disease, war, pogroms, natural disasters, accidents, foolish personal choices, etc., in order to produce their latest living manifestation – you. Statistically, those odds are practically astronomical. So, if by any chance you’re going through a difficult time right now, just glance up from your laptop or tablet for a brief moment and consider this – you exist, you are alive. You are a winner of our species’ historical lottery.
Harlan Garbell is president of HumanistsMN.