Google, Artificial Intelligence, and What Makes Us Human

This post is co-written with John McLean*

Nicholas Carr’s The Shallows: What the Internet Is Doing to Our Brains has received quite a bit of attention over the last two years. Released in June 2010, it became an international bestseller almost instantly. It was a finalist for the 2011 Pulitzer Prize in General Nonfiction, and to date it has been translated into 23 languages. Since then, much has been made of Carr’s suggestion that the internet and internet-based technologies are changing our brains (somewhat for the better, but mostly for the worse). Author and journalist (not to mention undergraduate neuroscience major and Rhodes Scholar) Jonah Lehrer offered an early critique that set the terms of the debate for many subsequent commentaries, suggesting that Carr neglects to mention “that the preponderance of scientific evidence suggests that the Internet and related technologies are actually good for the mind.”

We’re not inclined to weigh in on the issue of Google’s ability to increase activity in your dorsolateral prefrontal cortex – that’s better left to the neuroscientists and the Rhodes Scholars. But while the debate about internet-based technologies and the plasticity of the brain rages on, we find ourselves drawn to Carr’s comments on Google’s quest to master artificial intelligence. Carr rightly says that we are so accustomed to describing our brains in computing terms that “we no longer even realize we’re speaking metaphorically.” This reduces intelligence (and artificial intelligence) to “a matter of productivity – of running more and more bits of data more quickly.” Google’s founding CEO, Larry Page, feeds this sentiment with statements like this one from a 2007 speech: “when AI happens it’s going to be … not so much clever algorithms but just a lot of computation.” But what does this kind of approach to intelligence say about how we understand ourselves as humans? To put it another way, if the quest to build AI requires a computational understanding of the brain, then what kind of brains are we “re”-creating through AI?

If we continue to think of the brain entirely through computing metaphors, then the quest for AI will count as successful only to the extent that it can feed that metaphor back to us – i.e., to the extent that the computer can render brain function in algorithms and data structures. We share Carr’s worry that this results in a “pinched conception of the human mind.” More worrying is the fact that we don’t even have to achieve full artificial intelligence to suffer its blowback on our imaginations; as Carr notes, the computing metaphor has already displaced every competitor for describing brain function.

In a similar vein, Sam Harris, in a TED talk from 2010, observed, “Once you admit we are on a path towards understanding our minds at the level of the brain . . . we are inevitably going to converge on that fact space. So everything is not going to be up for grabs.” He goes on to suggest that we will soon be able to use brain scans to make data-driven judgments about happiness, love, freedom, etc. across cultural divides. Some might worry that the ability to produce data of that sort would tempt us to exile (or worse, eradicate) anyone whose brain scans don’t meet certain standards. That feels a little too distant, or at least a little too much like science fiction, for us. Our worry is more about how even the possibility of this technology affects our understanding of what it means to be human. In short, I am free to assume that any disagreement I have with someone else (on whatever topic) is simply a product of his or her underdeveloped prefrontal cortex. I don’t need the data in hand to justify this judgment; I don’t even really need to know what the prefrontal cortex does. I just need to believe that neuroscience will substantiate my superiority soon enough. As with artificial intelligence, we don’t have to realize the technology fully in order to suffer its blowback. As long as we’re trapped in a computational model of the mind, it’s easy for anyone (and everyone) to assume that disagreements and ambiguities are byproducts of deficient hardware.

Carr’s primary worry over these developments in our understanding of intelligence is the loss of quiet – in Google’s world, “there’s little place for the pensive stillness of deep reading or the fuzzy indirection of contemplation.” We suppose our concerns are less a nostalgia for silent spaces than an anxiety about the erosion of genuinely human communities. The more we learn to think of our own brains as computers, the more we will see others in the same light, and this erodes our understanding of ourselves as social creatures. After all, computers don’t “talk” to each other. (Of course they transmit data, but that’s not what we mean.)

On this point, Carr is spot on: the more we think of ourselves as computers, the more “ambiguity is not an opening for insight but a bug to be fixed.” As the quest for artificial intelligence continues, we’re not quite ready for the ambiguity bug to be fixed. It’s too important to what makes us human.

*John McLean is researching personal computing and Christian Ethics as a Summer Fellow at The Kenan Institute for Ethics at Duke University. He is a rising senior majoring in religion at Duke. You can follow his posts throughout the summer at the Kenan Summer Fellows blog.

This photo, by Dierk Schaefer, is featured here in accordance with its Creative Commons license.

5 thoughts on “Google, Artificial Intelligence, and What Makes Us Human”

  1. Yes, that’s the reality we see in our daily lives – what the Internet is doing to our brains.

  2. Sounds like an interesting book. However, I think the flaw here is in singling out Google as the culprit – it’s a search engine. It’s the internet in general that could support this theory. That said, I tend to disagree: I think the internet has helped us all.

    – Phil
