Thinking machines think just like us—but only up to a point.
IN THE SUMMER of 2009, the Israeli neuroscientist Henry Markram strode onto the TED stage in Oxford, England, and made an immodest proposal: Within a decade, he said, he and his colleagues would build a complete simulation of the human brain inside a supercomputer. They’d already spent years mapping the cells in the neocortex, the supposed seat of thought and perception. “It’s a bit like going and cataloging a piece of the rain forest,” Markram explained. “How many trees does it have? What shapes are the trees?” Now his team would create a virtual rain forest in silicon, from which they hoped artificial intelligence would organically emerge. If all went well, he quipped, perhaps the simulated brain would give a follow-up TED talk, beamed in by hologram.
Cajal’s way of looking at neurons became the lens through which scientists studied brain function. It also inspired major technological advances. In 1943, the psychologist Warren McCulloch and his protégé Walter Pitts, a homeless teenage math prodigy, proposed an elegant framework for how brain cells encode complex thoughts. Each neuron, they theorized, performs a basic logical operation, combining multiple inputs into a single binary output: true or false. These operations, as simple as letters in the alphabet, could be strung together into words, sentences, paragraphs of cognition. McCulloch and Pitts’ model turned out not to describe the brain very well, but it became a key part of the architecture of the first modern computer. Eventually, it evolved into the artificial neural networks now commonly employed in deep learning.
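The McCulloch-Pitts idea can be sketched in a few lines of code. This is a minimal illustration of the concept, not the original formalism in full (it ignores, for instance, the inhibitory inputs their paper also allowed): a unit pools binary inputs and fires if they meet a threshold, and by choosing thresholds you get the basic logical operations that could be "strung together" into larger computations.

```python
def mp_neuron(inputs, threshold):
    """A McCulloch-Pitts-style unit: fires (True) if and only if
    the number of active binary inputs meets the threshold."""
    return sum(inputs) >= threshold

# Logical AND of two inputs: fire only when both are on.
AND = lambda a, b: mp_neuron([a, b], threshold=2)

# Logical OR of two inputs: fire when at least one is on.
OR = lambda a, b: mp_neuron([a, b], threshold=1)
```

Chaining such units together, one unit's output feeding the next's input, is exactly the alphabet-to-sentences construction the article describes.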
These networks might better be called neural-ish. Like the McCulloch-Pitts neuron, they’re impressionistic portraits of what goes on in the brain. Suppose you’re approached by a yellow Labrador. In order to recognize the dog, your brain must funnel raw data from your retinas through layers of specialized neurons in your cerebral cortex, which pick out the dog’s visual features and assemble the final scene. A deep neural network learns to break down the world similarly. The raw data flows from a large array of neurons through several smaller sets of neurons, each pooling inputs from the previous layer in a way that adds complexity to the overall picture: The first layer finds edges and bright spots, which the next combines into textures, which the next assembles into a snout, and so on, until out pops a Labrador.
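The layered funneling described above can be sketched as code. This is a toy illustration only: the weights are arbitrary placeholders, not trained values, and a real vision network would have far more units per layer. What it shows is the structure, with each layer pooling weighted inputs from the previous, progressively smaller layer through a nonlinearity.

```python
import math

def layer(inputs, weights, biases):
    """One dense layer: each unit pools all inputs from the previous
    layer through a weighted sum, then applies a nonlinearity."""
    return [
        math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
        for row, b in zip(weights, biases)
    ]

# Toy "pixel" values flowing through three successively smaller layers,
# mirroring the edges -> textures -> snout progression described above.
pixels = [0.9, 0.1, 0.8, 0.3]
h1 = layer(pixels,
           [[0.5, -0.2, 0.1, 0.4],
            [0.3, 0.8, -0.5, 0.2],
            [-0.1, 0.6, 0.7, -0.3]],
           [0.0, 0.1, -0.1])          # 4 inputs -> 3 units
h2 = layer(h1,
           [[0.4, -0.6, 0.2],
            [0.7, 0.1, -0.3]],
           [0.05, -0.05])             # 3 units -> 2 units
out = layer(h2, [[0.9, -0.4]], [0.0])  # 2 units -> one "Labrador score"
```

In a real network, training would adjust the weights so that the final score is high for Labradors and low for everything else; the pooling structure itself is what the code makes visible.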
What computer scientists and neuroscientists are after is a universal theory of intelligence—a set of principles that holds true both in tissue and in silicon. What they have instead is a muddle of details. Eleven years and $1.3 billion after Markram proposed his simulated brain, it has contributed no fundamental insights to the study of intelligence.
Part of the problem is something the writer Lewis Carroll put his finger on more than a century ago. Carroll imagined a nation so obsessed with cartographic detail that it kept expanding the scale of its maps—6 yards to the mile, 100 yards to the mile, and finally a mile to the mile. A map the size of an entire country is impressive, certainly, but what does it teach you? Even if neuroscientists can re-create intelligence by faithfully simulating every molecule in the brain, they won’t have found the underlying principles of cognition. As the physicist Richard Feynman famously asserted, “What I cannot create, I do not understand.” To which Markram and his fellow cartographers might add: “And what I can create, I do not necessarily understand.”
This article appears in the June issue.