In his essay ‘MyBrain.net’, Geert Lovink pointed out a “neurological turn” in recent Net criticism, referring to a slew of authors who “cleverly exploit the Anglo-American obsession with anything related to the mind, brain and consciousness”. One of the most celebrated and respected of these authors is Nicholas Carr, whose books Does IT Matter? (2004) and The Big Switch (2008) have proven him to be a particularly well-informed and lucid analyst of contemporary tendencies in information technology, economy and ecology. His new book, The Shallows: What the Internet Is Doing to Our Brains, grew out of his widely discussed essay ‘Is Google Making Us Stupid?’ (2008) (see earlier post) and is basically an account of his research into the impact of new technologies on human cognition. Drawing on the work of Marshall McLuhan and Joseph Weizenbaum, amongst others, Carr starts out from the idea that an honest appraisal of any new technology, or of progress in general, requires a sensitivity not only to what’s gained but also to what’s lost. “We shouldn’t allow the glories of technology to blind our inner watchdog to the possibility that we’ve numbed an essential part of our self”. This skeptical premise leads to an equally wide-ranging and well-written account of the interplay between technology and the mind, in which Carr freewheels between the likes of Friedrich Nietzsche, Alan Turing and Michael Merzenich, between findings in computer engineering, clinical psychology and neuroplasticity. Carr argues that the tools we use to extend or support our mental capabilities – to find and classify information, to formulate and articulate ideas, to share know-how and knowledge, to take measurements and perform calculations, to expand the capacity of our memory – have a huge influence on our mental habits. Indeed, recent discoveries in neuroplasticity indicate that these tools actually give shape to the physical structure and workings of the human brain. 
We become, neurologically, what we think (which brings into harmony two philosophies of the mind that have for centuries stood in conflict: empiricism and rationalism). These discoveries are all the more important now that we seem to have arrived at an important juncture in our intellectual and cultural history, a moment of transition between two very different modes of thinking: from a linear thought process to an intellectual ethic of browsing and skimming, from the world of the page to the world of the screen. “By reducing the cost of creating, storing, and sharing information, computer networks have placed far more information within our reach than we ever had access to before. And the tools for discovering, filtering, and distributing information developed by companies like Google ensure that we are forever inundated by information of immediate interest to us, and in quantities well beyond what our brains can handle. As the technologies for data processing improve, the flood of relevant information only intensifies. More of what is of interest to us becomes visible to us. Information overload has become a permanent affliction, and our attempts to cure it just make it worse”. The more we use the Web, the more we train our brains to be distracted – to process information very quickly and efficiently but without sustained attention. Every time we’re surfing the waves of the WWW, we are plunged into an “ecosystem of interruption technologies” (Cory Doctorow). Certainly, some cognitive skills are strengthened – mostly related to certain kinds of fast-paced problem solving, particularly those involving the recognition of patterns in a welter of data (see for example the much-cited study of video gaming in ‘Nature’) – but these new strengths in visual-spatial intelligence go hand in hand with a weakening of our capacities for concentration and contemplation. 
As we, ever restless informavores craving our Net fix, skim the Web as quickly as our eyes and fingers can move (apparently our eyes skip down pages in a pattern that roughly resembles the letter F – “F”, wrote Jakob Nielsen, for “Fast”), we revert to being mere decoders of information. As we juggle several mental tasks at the same time, we become “mindless consumers of data”. Loud statements indeed, but Carr is well aware of their sour resonances: he is careful to avoid the neoconservative roads the likes of Andrew Keen are exploiting, frames the current paradigm shift in a broad historical account of intellectual ethics, and draws upon a wealth of scientific studies indicating how scholars of the mind believe, or at least worry, that our use of digital hypermedia is having a deep influence on our ways of thinking. Carr fully acknowledges that it’s neither possible nor preferable to “rewind” or “unplug”, but aims to build up an understanding, or at least a consciousness, of the changes in our patterns of attention, cognition and memory as we’re adapting to a new information environment, and of what these changes might imply, in the long term, for our culture and society. We cannot deny that a new intellectual ethic is taking hold – an ethic its inventors couldn’t have foreseen – but while we are swiftly and eagerly adapting to the circumstances, we should also be aware of the price we pay to assume technology’s power. Carr quotes movie critic David Thomson, who once observed that “doubts can be rendered feeble in the face of the certainty of the medium”. Thomson was writing about cinema, but his observation also applies, perhaps with even greater force, to the Net.
Arguably the most satisfying pieces in The Shallows are the ones in which Carr engages with information politics and control, notably ‘The Church of Google’ chapter, in which he (drawing on material that earlier appeared in the article ‘The Google Enigma’) reveals Google’s strategy as an extension of Taylorism. “What Taylor did for the work of the hand, Google is doing for the work of the mind”. Isn’t it remarkable, Carr asks, how the intellectual ethic of Google mirrors Taylor’s basic concepts of scientific management, as summarized by Neil Postman in his book Technopoly?
“These include the beliefs that the primary, if not the only, goal of human labor and thought is efficiency; that technical calculation is in all respects superior to human judgment; that in fact human judgment cannot be trusted, because it is plagued by laxity, ambiguity, and unnecessary complexity; that subjectivity is an obstacle to clear thinking; that what cannot be measured either does not exist or is of no value; and that the affairs of citizens are best guided and conducted by experts.”
Carr describes how Google relies heavily on cognitive psychology research to further its goal of making people use their computer technologies more “efficiently”. He quotes Marissa Mayer, Google’s Vice President of Search Product and User Experience, saying that “on the Web, design has become much more of a science than an art. Because you can iterate so quickly, because you can measure so precisely, you can actually find small differences and mathematically learn which one is right (…) You have to make words less human and more a piece of the machine”. For Google, information is above all a commodity: anything that stands in the way of its collection, dissection, and transmission is a threat not only to its business but to the new utopia of cognitive efficiency it aims to construct on the Internet. But Google, as the supplier of the Web’s principal navigational tools, also shapes our relationship with the content it serves. “Google is, quite literally, in the business of distraction (…) Every click we make on the Web marks a break in our concentration – and it’s in Google’s economic interest to make sure we click as often as possible.” And as fast as possible: after all, “the colonization of real-time”, as Lovink puts it, has now also become one of its main concerns. Citing Twitter’s achievements in speeding the flow of data, Larry Page (co-founder of Google) said that his company wouldn’t be satisfied until it is able “to index the Web every second to allow real-time search.” This quest is not only the driving force behind the Google Wave service (which, as one commentator has stated, “turns conversations into fast-moving streams-of-consciousness”) but also the prime reason why Google has recently revamped its search engine. The quality of a page – as determined by the links coming in – is no longer the chief criterion in ranking search results – as it turns out, it’s now only one of some 200 “signals” that the company monitors and measures. 
No, the priority now lies in what it calls the “freshness” of the pages it recommends. “Google not only identifies new or revised Web pages much more quickly than it used to but for many searches it skews its results to favor newer pages over older ones. In May 2009, the company introduced a new twist to its search service, allowing users to bypass considerations of quality entirely and have results ranked according to how recently the information was posted to the Web. A few months later, it announced a “next-generation architecture” for its search engine that bore the telling code name Caffeine”. In June this year a post on the Google Blog mentioned that the system had been completed. Significantly, it provides “50 percent fresher results for web searches than our last index, and it’s the largest collection of web content we’ve offered.” And so Google continues to expand its hold on Web users and their data, by diversifying its services and further colonizing all types of content. “For Google, everything that happens on the Net is a complement to its main business: selling and distributing online ads.” Most of its services (like YouTube) are actually not profitable in themselves, but they enable the company to collect more information, funnel more users towards its search engine, and prevent would-be competitors from gaining footholds in its markets. Nearly everything the company does is aimed at reducing the costs and expanding the scope of Internet use (that’s why it wants information to be free). “Its ideals and its business interest converge in one overarching goal: to digitize ever more types of information, move the information onto the Web, feed it into its database, run it through its classification and ranking algorithms, and dispense what it calls ‘snippets’ to Web surfers, preferably with ads in tow. With each expansion of Google’s ambit, its Taylorist ethic gains a tighter hold on our intellectual lives.”
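The ranking shift Carr describes, from link-based quality towards recency, can be made concrete with a toy scoring function. This is a minimal sketch built on my own assumptions: Google's actual signals and their weights are not public, so the "200 signals" are collapsed here into just two, a link-quality term and an exponentially decaying freshness boost.

```python
import math
import time

def score(page, now, freshness_weight=0.5, half_life_days=7.0):
    """Toy ranking score: link-based quality blended with a recency boost.

    `page` is a dict with `inbound_links` (a crude stand-in for link-based
    quality) and `posted` (a Unix timestamp). The freshness boost decays
    exponentially with a chosen half-life, so a week-old page gets half
    the boost of a brand-new one.
    """
    quality = math.log1p(page["inbound_links"])     # diminishing returns on links
    age_days = (now - page["posted"]) / 86400.0
    freshness = 0.5 ** (age_days / half_life_days)  # 1.0 when new, 0.5 after a week
    return (1 - freshness_weight) * quality + freshness_weight * freshness

now = time.time()
old_authority = {"inbound_links": 5000, "posted": now - 365 * 86400}  # heavily linked, a year old
new_post = {"inbound_links": 3, "posted": now - 3600}                 # barely linked, an hour old

# With freshness weighted at zero, the heavily linked page wins;
# raise the weight and the hour-old page overtakes it.
assert score(old_authority, now, freshness_weight=0.0) > score(new_post, now, freshness_weight=0.0)
assert score(new_post, now, freshness_weight=0.9) > score(old_authority, now, freshness_weight=0.9)
```

Turning a single dial flips which page ranks first – which is the sense in which skewing towards "freshness" can let results bypass considerations of quality entirely.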
No matter if or how long Google is able to maintain its dominance over the flow of digital information, its intellectual ethic will remain the general ethic of the Internet as a medium. It’s an ethic of informality and immediacy, which might lead to a narrowing of expressiveness and a loss of eloquence (“writing will become a means for recording chatter”). It’s also an ethic, as the Google Books project (what Mayer calls its “moon shot”) makes clear, that involves fragmentation and decontextualisation. The books Google is digitizing are not being treated as self-contained literary works but as piles of data to be mined. “It’s not a library of books. It’s a library of snippets.” A result of this “slice and dice” strategy, or what economists call the “unbundling” of content, is, according to Carr, that “we don’t see the forest when we search the Web. We don’t even see the trees. We see twigs and leaves.” “What we’re experiencing is, in a metaphorical sense, a reversal of the early trajectory of civilization: we are evolving from being cultivators of personal knowledge to being hunters and gatherers in the electronic data forest.” The Net seizes our attention only to scatter it. As attention (“a ghost inside the head”, says developmental psychologist Bruce McCandliss) is really the key to memory consolidation (and long-term memory can be considered the seat of understanding), the impact is considerable. Memory now functions as an index, pointing us to places on the Web where we can locate the information we need at the moment we need it. Socrates might have actually been right all along when he warned of technologies that would “implant forgetfulness” in the mind, providing “a recipe not for memory, but for reminder”. 
Those who celebrate the “outsourcing” of memory to the Web or the notion of “perfect remembering” (see also previous post) have been misled by the ubiquitous metaphor that portrays the brain as a computer (typically the post-Internet conception of memory, in which artificial and biological memory come together). Carr explains how the old botanical metaphors for memory, with their emphasis on continual, indeterminate organic growth, are really remarkably apt: biological memory is in a perpetual state of renewal, and furthermore, the brain never reaches a point at which experiences can no longer be committed to memory. But when we’re facing many information faucets at the same time – as when we’re power-browsing – it becomes harder to retain information or transfer it to our biological memory. “Our brains become adept at forgetting, inept at remembering”, which might lead to a “self-perpetuating, self-amplifying loop”: the more difficult it gets to “process” information and weave it into conceptual schemas, the more dependent we become on the Web’s information stores. Conclusion: more information can mean less knowledge. The tendency to automate cognitive processes and create ever more user-friendly interfaces, embedded in most contemporary software tools, reinforces this cul-de-sac. Whenever we externalize problem solving and other cognitive chores, we reduce our brain’s ability “to build stable knowledge structures”. Whenever we use tools to sift information, we frame our thinking (sociologist Robert Evans has noted that automated information-filtering tools, such as search engines, tend to serve as amplifiers of popularity, quickly establishing and then continually reinforcing a consensus about what information is important and what isn’t). Whenever we go online we are following scripts written by others, routinely and passively going through the motions. “The effects of technology do not occur at the level of opinions or concepts”, McLuhan wrote. 
Rather, they alter “patterns of perception steadily and without any resistance.” But perhaps there are ways of resisting, without resorting to nihilist strategies of “escaping” or “taming” the Net. Perhaps there are ways of countering the “terrors” of immediacy and informality, by designing alternative ecologies, “technologies of debate” (Bernard Stiegler), or a “contre-arsenal” (Paul Virilio). What we might need, following Stiegler’s reasoning, is new techniques of perception. Which reminds me of a chat I recently had with a political analyst working for a government agency, who was talking about the “thinking” software that had been installed to analyze the wealth of incoming data. “The results are basically correct, there is not one I could find that is incorrect”, he said, “but then again, they are all missing the point.”