Dept of English, NYU
Reading through Carr’s article, I was particularly struck by the following passage: “Ambiguity is not an opening for insight but a bug to be fixed. The human brain is just an outdated computer that needs a faster processor and a bigger hard drive.”
I found this moment pretty disturbing. Since our first class, I’ve grown more conscious of my screen time and the possible deleterious effects it’s had on my brain and thought processes (effects which, I’ve come to realize, can be felt viscerally). I found myself nodding my head in quiet understanding as Carr lamented his waning ability to concentrate on longer literary works. Previously I had accounted for this waning in myself as a mere decline in interest, as a possible shift in passions or a perfectly explainable boredom with school-assigned, antiquated texts. But, after reading Carr’s article, I really believe that my diminished ability to concentrate is an effect of my brain’s rewiring, a rewiring brought about by the Internet.
Last semester, for instance, I had to read a great deal of Bertolt Brecht for a theatrical theory class. Brecht’s writings, at least in my perception, tend to be dense and convoluted, aspects that really challenged my ability to maintain focus and engagement with them. Instead of taking my encounters with ambiguity as an “opening for insight,” I merely found them frustrating and, as a result, opted to meet difficult questions with the easiest, surface-level answers I could think of, all in the name of merely completing the assignment. Completing it was the priority, not growing from it.
As the Internet and the information mediums it encompasses inevitably expand, I worry that this mode of processing will as well, to such a degree that it, perhaps, becomes the sole mode. I worry that the human brain will largely become little more than a traffic controller of ceaseless streams of information, rather than the prospector and deep analyzer that it has previously been recognized as. Perhaps this fear seems distant from imaginable truth; however, I express it knowing full well that our already gargantuan netscape will only grow, which in itself is difficult to picture.
I agree with Nick Carr’s idea in “Is Google Making Us Stupid?” that the Internet is changing how we think, but I don’t believe that the change is necessarily negative. He laments his loss of ability to focus on long readings, but this alteration is likely a result of his own personal inability to concentrate and not the fact that the Internet can provide constant distractions. A person who is prone to distractions will find them regardless of the form that they come in; for example, someone who can’t focus will just as easily be distracted by a passing butterfly as a Facebook notification. In fact, people often learn to adapt to circumstances that they may have once found distracting. For instance, I once found New York City street noise to be unbearably loud when trying to do homework in my dorm room, but now I’m used to the constant honking and subway rumbles. Similarly, people who have grown up with constant Internet access on their phones will learn to tune out the temptations of Facebook, email, Twitter, and YouTube if they want to concentrate on reading a long piece of prose. Years ago, these temptations existed in the forms of television and radio. True, the medium of distraction has changed, but that does not alter a person’s ability to ignore outside diversions.
This idea is further reinforced in Jamais Cascio’s article, “Get Smarter,” in which he mentions that developments in technology have always significantly changed human culture and lifestyle because we learn to adjust to them. He points out that as the human population grows and becomes more intelligent, the systems that we rely on have naturally become more complex. However, I find his claim about brain augmentations becoming morally acceptable disturbing and, I hope, untrue. To back this shocking assertion, he states that “on college campuses, the use of ADD drugs (such as Ritalin and Adderall) as study aids has become almost ubiquitous,” but as a person who currently attends college, I know that this is not true. Out of all the people I know on college campuses across the country, I have only heard two stories of study aid use, and both ended on the note that the user never tried them again. I don’t doubt that there are some people who resort to such measures, but the claim that unnatural study aid use is “ubiquitous” seems far-fetched to me. With the growing popularity of the “organic” movement, I can’t see the majority of people accepting brain augmentations or other drugs/technological innovations that alter the state of one’s mind. Many products today are going in the opposite direction with promises of fewer alterations, such as the removal of chemicals and synthetic materials in food, hygiene products (shampoo, facial cleanser, etc.), and pharmaceuticals (nutritional supplements, sleep aids, etc.). It disgusts me to consider accepting something as alarmingly invasive and unnatural as having a chip placed inside one’s brain or taking a pill that alters the way you think. This seems distressingly similar to drug usage, which also artificially alters your mind in a way that can foster an unhealthy addiction. Plus, what about the unknown long-term effects of drugs such as modafinil?
Without adequate long-term studies, no one can tell what effects or diseases prolonged use of modafinil might cause.
Thus, I believe that human adaptation to the ubiquity of the Internet is a positive development, without going so far as to claim that we should physically alter ourselves with technological “brain-augmenting” innovations. Clay Shirky in “Why Abundance is Good: A Reply to Nick Carr” put it best: “…our older habits of consumption weren’t virtuous, they were just a side-effect of living in an environment of impoverished access.” In other words, bemoaning the general population’s lack of interest in reading long works of literature is based on the idea that one’s enthusiasm for such works makes one superior, morally or intellectually, to people who prefer short blog posts or Internet videos. Works of literature that are profound and stimulating enough to withstand the test of time and remain popular with the general population will continue to do so, and those that are not may be bypassed as time goes on, and rightfully so. Even those generally unpopular works will continue to be studied and admired by historians, and perhaps even uploaded to the Internet so that nostalgic history-lovers everywhere can enjoy them. Thus, perhaps Carr’s friend should just accept the fact that he personally does not find War and Peace interesting anymore, and stop blaming the Internet for his own diminishing interest in Tolstoy’s work.
It would depend on how we believe the human brain processes information. Your senses are constantly taking in stimuli from every direction, whether you are consciously aware of it or not. There is no way to passively shut it off, short of locking yourself up in a sensory deprivation chamber. Obviously, very strong stimuli will unconsciously move to the forefront, such as blinding light or searing pain. But online, there seems to be a lot of white noise, such as emails, texts, and funny cat videos. The brain has always served as a sort of traffic controller, so the question is how it goes about performing this task when a million new people suddenly get their driver’s licenses.
If you think of it as a filter, tuning out extraneous information unimportant to the task at hand, then what would be the change? Is the sieve not as fine, allowing the information you need to come through jacketed in a clump of useless garbage, necessitating a secondary filter to process it? Is it a triage center, moving the information to the parts of the system that would process and handle pertinent tasks based on how critical they are? Is there a portion of your brain maintaining your online connections, while another portion takes in the book you are reading right at this moment? The current model for how your brain processes information might be like one of the above models, or something else entirely. But certainly it had a system to do so prior to the existence of mass media. And it will have a new system when we reach our next plateau in information distribution. I do not think the amount of things a person takes in from day to day has increased that drastically between the pre- and post-Internet eras. Would it be much greater than the change after artificial light came into common usage, when humans could become nocturnal creatures as well, opening up the half of the day cycle that was previously a darkness we instinctively feared? We had large increases in the amount of information we intake long before electricity or even print. I find my thinking more in tune with Clay Shirky’s article. We discussed adaptability on Thursday, and although I know nothing of neuroscience, I think that we could potentially learn to process things more finely to take advantage of the wealth of resources.
Shirky’s article mentions, regarding Tolstoy’s War and Peace, “The reading public has increasingly decided that Tolstoy’s sacred work isn’t actually worth the time it takes to read it.” The book is organized in a manner that does not synchronize so well with the minds of the current era. But although it may not resonate with the minds of people from long after its date of publication, we certainly do not lose it. We can preserve everything, and a lack of deep analysis of something now certainly does not preclude us from doing it at a later date. In return we have an opportunity to “rediscover” thinkers and artists who could speak to us, but perhaps were never popular enough to be widespread. We can choose where to focus. Things like the “like” button or recommendations from those who have consumed the same items can provide you with a roadmap of sorts to take your mind where it wants to go. Most importantly, unless the creator simply refuses to make the material available in a digital format, everything will be preserved, and nothing will be lost from here on in. I believe that to be worth the price of the information bombardment.
So we come to the question of how we can be discriminating with what our eyes and ears are telling us. Why do we rank William Shakespeare above, say, Christopher Marlowe in terms of cultural significance? How something relates to a reader has always been a personal issue. You can only find out by being a tourist in the literature. Let it take you places you want to be or maybe places you are afraid to be. Partake in the food, including the things you think might be fried insects. There are more destinations to visit than one person can go to in a lifetime. Perhaps there is too much. If only there were some resource where you could find people with interests in the same subject.
I too share Ben’s fear of humankind relying too heavily on artificial intelligence. For instance, I find that whenever I have a question or do not immediately understand something, my first instinct is to mindlessly plug that question into a search engine. I sometimes even get a little annoyed when my question is not answered within the first few searches that pop up (How does no one know the answer to my question? How could no one have ever asked that question before?). Most recently however, any instant gratification I receive from having my question answered within a mere ten seconds is quickly followed by a sad sense of disappointment as I realize the loss of opportunity to employ and develop my critical thinking skills. Relying on computers, on Google and other search engines like it, has no doubt stymied my inclinations to explore an inquiry through thoughtful experimentation. In today’s media-infused culture where anyone who is anyone is expected to always be “on,” I feel a certain pressure to always be in the know, a kind of social (media) Darwinism. It is a hard concept to be aware of, and an even harder concept to escape.
Having said that, I think that the “rewiring” of our brains is not necessarily something we should agonize about. In Carr’s article, he references Nietzsche’s comment that “our writing equipment takes part in forming our thoughts.” It is an interesting concept: how something will be read can affect how that something is to be written. Relating that idea to today, perhaps the fact that older, denser “literary” works were written for audiences without computers means that readers living in a world with such technology are no longer wired to need or even want a longer writing form. After all, shouldn’t a writer always be mindful of his or her audience? For example, a work that was originally written as a play or musical to be performed on Broadway (for a theater-going crowd) does not always touch the more mainstream audiences that would go to see the movie version (see John Travolta in 2007’s Hairspray). Also, I feel that not being able to finish a long article or book due to lack of concentration may just be a sign that we are learning how to filter unneeded information; what is important for our survival stays, and what is not is thrown out. Aren’t we just acting as good editors, using our mental red pen to cross out what we as readers do not need? And if we do not need it (including for pleasure), then why are we taking the time to learn it?
Thinking of how the language of computers affects our brain, I can see now that my brain almost functions as its own webpage, with several ideas or words hyperlinked to other ideas or words. I read something, think of something else, then something else, each time opening a new browser. I eventually forget about the original thought I started on, but I cannot simply minimize the newly opened trains of thought as I can on a computer. In some ways this is good: I am forming connections between different ideas, learning to merge information highways between one thought and the next. I am experiencing the “intellectual vibrations those words set off within our own minds” that Carr attributes to deeply reading a printed work. However, the deeper in thought I go, the farther from home I am. I have lost my original identity, and I’m not sure I’m comfortable with that.
In order to solve this problem, I like what Dan said about how we can “choose where to focus” and find tools provided by social media that can help us explore what we find interesting. The example of Vishal in the New York Times article, “Growing Up Digital, Wired for Distraction,” demonstrates how technology can actually allow us to discover our passions. Although the article cites recent imaging studies of the brain during downtime, during which the brain can “develop the sense of self,” I find it interesting how Vishal finds his sense of self during a period of high stimulation. It makes me wonder if the rewiring of our brains is not the cause behind the decline in reading, but rather that the skills needed in today’s world are dependent on technology and our ability to navigate it. Ironically, technology demands that we be more educated in order to use it, yet keeps us from learning according to a past standard. A new standard must be forged, but I’m not exactly sure what that would entail.
NB: Much of what I’ve written below is an attempt at playing devil’s advocate … but certainly not all of it 🙂
The Carr piece to me reeks of egotism — of his own mind, but also of the “Western ideal.” Carr’s main criticism is of Google’s attempt to engineer a computer system smarter than the human brain, and then have it integrated with or subsume the brain; he characterizes his criticism with a sort of science-fiction tone, writing “Their easy assumption … is unsettling.” What Carr seems to fear (and his mode in the latter half of the article is one of fear — he calls himself a “worrywart”) is that, in Google’s increasing importance to the human experience, we are losing / will lose something good and important about what it means to be a rational thinker living in the Western world. All those ideals we are taught to value — sustained critical thinking, extensive and intensive reading, original thinking and writing — fly out the window when we begin to rely on a computer program for answers.
Carr’s argument, however, makes a huge assumption: That the above values, the sustained thinking and original ideas and individual contributions, are actually meaningful and something we should indeed value ourselves. Under this assumption, he places the post-Enlightenment, Western mind on a pedestal. Carr quotes Richard Foreman as saying that the ultimate ideal is: “A man or woman who carried inside themselves a personally constructed and unique version of the entire heritage of the West.” The entire heritage of the West? In my experience, Google is a better candidate for something which encompasses “the entire heritage of the West” than any human (save for maybe Ken Jennings). What is scary and science-fictiony and HAL-esque about Google is its lack of agency; its lack of what Foreman would call a “personally constructed and unique version” of the information it offers. Google offers no opinions or personality, only facts.
While the prospect of an opinion-less entity overtaking the human mind is scary to most 21st-century residents (including myself), this fear, I believe, comes largely out of our relatively recent valuation of individualism and unique thought. Looking back to the literature of only 500 years ago, the modern notion of a good idea being necessarily original to its author is completely absent. Rather, more valued was one’s place within a system, within a collective (look at the importance of religious and court writings). So yes, Carr’s fear that we will lose this original and personal thought in the age of Google is a valid one. But his valuation of this originality may be less an objective truth and more a modern phenomenon. In fact, resigning one’s agency to Google is not terribly different from the sort of collective-minded deference we have seen in myriad social groups throughout the past and present: think of religion, nationalism. We can look to Google as a sort of religion / bible, to which people of the future will look for all answers (and, in most cases, find them. Answers based in fact, too, which is more than most religious texts can say). Letting Google overtake the mind is, in the way that devoting yourself to a religion or getting married is, sort of like dying. You give up your agency and choice and individual life / thought to something else, so that you might be able to either a) concentrate on other, more important things and / or b) be a part of some whole, something bigger than yourself. I think this prospect is scary to us 21st-century Americans because we value our individualism, each an Army of One – but historically it’s a less scary future than Carr makes it out to be.
Throughout this week’s readings, I found myself intrigued by the relationship between Nicholas Carr’s “Is Google Making Us Stupid?,” Clay Shirky’s “Why Abundance is Good,” and the NEA’s 2004 study entitled “Reading at Risk.” While Carr openly laments the decline in literary reading, for which he partially blames the “swiftly moving stream of particles” of the Net (Carr, 2), the NEA affirms that “fewer than half of American adults now read literature” (NEA, Reading at Risk). Yet it is Shirky’s practical approach to solving an inevitable problem brought about by technological development that truly puts into perspective the manageable, embraceable nature of change.
In general, I can relate to Carr’s issue with “the Net […] chipping away my capacity for concentration and contemplation” (Carr, 3). I, too, have borne witness to the ways in which the internet has affected my style of reading, writing, and learning. The National Endowment for the Arts may even prove his hypothesis to be true. Although the NEA states that “no single factor caused this problem,” I believe it is certainly possible that increased access to the internet has negatively affected the reading habits and styles of American adults (NEA, Reading at Risk). If the brain changes according to the ways we use it, it is not difficult to believe that readers can no longer sit down to read Tolstoy’s War and Peace (Carr, 2). But still, I am unsatisfied with Carr’s lack of a proposed solution and his seemingly inconsequential musings on the future of artificial intelligence. Carr’s nostalgic tone and his concession, “Maybe I’m just a worrywart,” give his article an attitude of defeatism that I find neither sympathetic nor productive (Carr, 7).
Instead, I am drawn to Shirky’s reply, in which he states that “the real anxiety behind [Carr’s] essay: having lost its actual centrality some time ago, the literary world is now losing its normative hold on culture as well. The threat isn’t that people will stop reading War and Peace. That day is long since past. The threat is that people will stop genuflecting to the idea of reading War and Peace” (Shirky, 1). I find Tolstoy’s War and Peace to be a particularly appropriate example for this discussion. As a piece of literature, War and Peace is romanticized for its word count in a similar way that Carr romanticizes the lost days of literary reading. What Shirky brings to attention is that technological developments have always caused drastic changes in culture, and the new genres and media brought about by the internet are no different. To quote an elementary maxim: just because something, be it manner of concentration or genre of reading, is different doesn’t always mean it is bad. Instead of fearing change and distracting from productive resolutions by hypothesizing on future developments, we must try to understand and negotiate “the greatest expansion of expressive capability the world has ever known” (Shirky, 2). As a society, we will only be able to do so by embracing new technologies and learning to harness the power of our evolving physiologies.
Before reading Carr’s article, my initial reaction to the provocative title “Is Google Making Us Stupid?” was a resounding ‘no.’ Surely, the wealth of information available on the Internet could only serve to enhance our daily lives and make us more knowledgeable, compassionate, and well-informed human beings. However, Carr’s description of his own struggles to concentrate on reading for extended periods of time – “deep reading” as he calls it – eerily mirrored my own recent struggles with reading for pleasure. While I have no problem reading a required text for class, I have significantly more trouble reading for pleasure, as I find it increasingly difficult to sit for extended periods of time immersed in a novel. I don’t believe Google has made me stupid; rather, I find my brain caught up in a transition from a print-based world to a digitized one.
While Davidson offers a positive and optimistic evaluation of the advanced digital age, Carr is decidedly more hesitant. Where Davidson sees the brain as a malleable tool suited to tackle an increasingly busy digitized world, Carr sees a window for stunted growth, marked by his sudden inability to process complex information. I see Carr’s argument as a lament for the past, which I don’t find to be the most convincing or productive approach. While I agree that our brains are rapidly modifying to fit the digital environment (evidenced by shared difficulty in attentive reading), I don’t think that our capacity for information processing has decreased. The Internet, as well as our browser options, presents information in bits and pieces – hyperlinks, comment sections, multiple tabs, bookmarked pages. Our brains are moving at lightning speed to keep up with the wealth of information available at the click of a button. Who’s to say that this particular form of information processing, while not particularly linear, has not fine-tuned our brains to handle an increasingly complex and interconnected world?
Throughout human history, our brains have kept pace with our environments. Jamais Cascio presents this idea quite eloquently, as his article “Get Smarter” praises the human mind for its unique adaptability. In looking to the past to predict the current trajectory of the digitally oriented society, Cascio reminds us of the limitless potential of the mind, which comes across as a soothing respite from Carr’s harsh evaluations of our present era.
This current transition period is anxiety-inducing because we do not yet know where we are going. However, if there is any tool known to mankind equipped to handle the complications of the present, it is the human brain.
The first thing I objected to in Nicholas Carr’s article “Is Google Making Us Stupid?” was the title. “Stupid.” What a relatively meaningless word, taken out of context. Stupid how? Now I know that, in order to get an argument across, sometimes you have to be a little dramatic. Specificity, however, should not be compromised, especially in an article that deals with, to an extent, the brain and its capabilities. If you’re going to take a scientific approach, for even a moment – if you’re going to bring in brain plasticity and the consequences of prolonged Internet exposure to the neural network – I’m going to need to know how, exactly, we’re defining “stupid.” Maybe (in this article, at least) it’s self-explanatory: loss of focus, divided attention, a broken link on the long chain from deep reading to deep thinking. If that’s so, I find it a disappointingly reductive measure of intelligence. And how convenient to pin all the blame for this perceived loss on emergent technology; because it’s new, because we haven’t yet explored all its many effects, who’s to say that it isn’t to blame? I can’t, of course, say that it isn’t – but other potential contributing factors seem to have faded to the background. Internet anxiety has pushed out more immediate anxieties; after all, how many problems can be traced back to a single source?
I’m automatically resistant to any article/opinion piece that tries to scare me into thinking a certain way; using a sort of monster-movie analogy (HAL’s sentience and the chaos it wreaks) to prove a point seems, to me, a bit much. It also fails to convince me of anything – Carr’s apparent vision of the future, of mindless, automaton humans and unstable artificial intelligence, can be attributed to nothing more than power of suggestion. While there is of course a chance that a HAL-like machine could one day be created, could then malfunction and meet a HAL-like end, we have no explicit reason to believe it will be so – except for, perhaps, an instinctive mistrust of things that aren’t outwardly “controllable.” We do know that the reality of artificial intelligence may not be far off, but does that necessarily spell disaster? Admittedly, I have no knowledge of how this A.I. will operate, but I feel secure that it will differ from its fictional counterpart in at least some of its functions. Because we have movies, television shows, and books that have taught us to be wary of future technology, wary of anything non-human that imitates human speech/mannerisms/intelligence, we shy away from these paths of progress. I’m not saying that we’re wrong to shy away; I can’t possibly know. I will, however, say that it’s a pretty sizeable leap from distraction and information overload to a machinated future of human subjugation.
That said, I understand, and can even appreciate, Carr’s evocation of 2001: A Space Odyssey. In a time when more and more of our media – the majority of our information, in fact – is digitalized, it’s hard not to look to those science-fiction doomsday prophecies. People still don’t understand the full implications of what this technology can do, let alone what it can or will be doing in even a matter of years. That, in and of itself, is a little frightening: fear of the unknown, fear of possibilities we can barely comprehend. Still, what should the response to this fear be? As Clay Shirky writes in “Why Abundance is Good,” to adapt to the great changes taking place “the one strategy pretty much guaranteed not to improve anything is hoping that we’ll somehow turn the clock back.” The momentum we’ve built up is either going to carry us forward, or bowl us over.
Cathy Davidson, Nicholas Carr, and Jamais Cascio are all grappling with the issue of technological change. They present us with well-supported arguments, references to medical studies, direct quotes from world-renowned scholars, and fascinating cases of technological panic upon which to base our own opinions and ideas. Nonetheless, I have observed a recurring pattern or idea in each of their articles: the concept of adaptation. Surely, each author refers to and explains adaptation to technological changes in his or her own unique way. However, our ever-changing surroundings must be accounted for, and a plan detailing how exactly to cope with these changes must be implemented. Upon reading Cathy Davidson’s introduction to Now You See It: How the Brain Science of Attention Will Transform the Way We Live, Work, and Learn and Nicholas Carr’s article entitled “Is Google Making Us Stupid?” I immediately took notice of each writer’s perception of change and adaptation. Carr believes that whether for better or for worse, our brains must adapt to the changes technology has presented us with—particularly the changes computers have presented us with. The computer is one form of technology, but it infiltrates the rest of our lives, replacing other technologies simultaneously. If it can dominate the clock, the calendar, and other monumental concepts and tools, it can and will dominate our minds and our lives. In turn, a chain reaction is created: one dominant piece of technology can change our lives, our minds, our habits, other technologies, and even the media. Carr then reacts to these ideas by arguing that our brains must adjust in conjunction with the changes computers have introduced.
Carr summarizes, “Never has a communications system played so many roles in our lives—or exerted such broad influence over our thoughts—as the Internet does today.” Despite his fear of the exponential decrease of contemplation and deep thought, Carr, in these few paragraphs, introduces us to the concept of neural adaptation in response to the power of the computer. This could quite possibly be the answer to the problem of attention. If a computer plays so many roles in our lives, it is constantly consuming our thoughts. If it constantly consumes our thoughts, how can we focus on anything but a computer and all that it has to offer with just the click of a mouse? Cathy Davidson addresses a similar concept with different ideas.
I found Davidson’s argument to be the most fascinating. I almost interpreted her introduction as a call to action. She wants society to move with and adapt to the everyday technological changes we encounter and will continue to encounter for decades. It seems that both Davidson and Carr argue that when times change (and technology with the times) we are forced to change too. Whether these changes are positive, negative, or beneficial is up to the reader to decide. Davidson argues that, historically, monumental technological changes must be greeted with institutional adaptations. Whereas Carr argued that technological changes must be greeted with neurological adjustments, Davidson argues for adjustments in educational institutions and work environments with each epic innovation. I think that although Carr’s article and Davidson’s book feature an array of critical ideas, they are both discussing the same concept and establishing foundations for improvement in attention—adaptation to change equals newer and better attention retention skills. Davidson’s new classroom or work structure would feature the pooling together of ideas and observations to create a team of specialized individuals working toward the same goal. Together, their specializations would create a mastermind, never losing sight of large ideas, as attention is divided and its members are more focused than ever.
Jamais Cascio’s “Get Smarter” proposes yet another opportunity for technological adaptation. Cascio’s article unleashed yet another new perspective and theme of adaptation: when our brains change in order to adapt to new conditions and new technology, we might actually become smarter. Thus, if we adapt specifically to computers or to the Internet, this technology can sharpen our thinking and ultimately make us smarter. In my opinion, it all comes down to what society considers more important: contemplation and deep thought (Carr’s article) or the absorption of facts. Is it better to breed insightful thinkers or to train people to learn everything all the time on the Internet—to have everything at our disposal at any time? Either way, an adaptation and a plan are required to tackle such a feat. Perhaps Davidson’s ideas could help here: can’t we change the Internet so that it makes us contemplate and think deeply? Could we find a way to make computers change us for the better? Why not manipulate the technology that people fearfully argue is manipulating our brains?
As I started to read Nicholas Carr’s article “Is Google Making Us Stupid?”, I noticed that the style of writing was all too familiar. It was not the first article we have read for this class that begins with anecdotes from literary types explaining that they no longer believe they can read books or long articles. Seeing this similarity between this article and previous ones, I assumed I would be hearing the same argument I had heard before, but I was mistaken.
Almost halfway through the article, Carr quotes Maryanne Wolf, a developmental psychologist at Tufts University, who states that reading is not an ability humans inherently possess, but rather one that must be learned. I am sure that to many of my classmates reading this article, this idea was nothing new, but it made me rethink many conclusions I had come to about reading.
For starters, I always assumed that people stopped reading books because they found other forms of media that they preferred, be it video, audio, or some mix of the two. Whenever I heard people respond to my argument by saying that people are becoming less capable of reading, I assumed it was a feeble attempt at establishing reading as something done by the elite, or most intelligent among us. This article, however, completely changes my opinion.
Reading is indeed something we must learn. Perhaps it had never occurred to me because reading seems like something humans are so capable of. However, regardless of how capable we are of learning a certain activity, the fact that we must learn it is of the utmost importance. It means that we must continually practice the activity in order to remain proficient. For instance, humans possess the ability to play the piano, but if one were to stop playing for an extended period of time, say 25 years, that person would not play as well as he once did. This observation is perhaps the most compelling evidence I have encountered yet that reading books and long articles is of significant importance.
The one aspect of Carr’s article that I did not agree with was his discussion of artificial intelligence. He seems to suggest that because Google is interested in creating something that is “smarter” than humans, they are somehow suggesting that this powerful artificial intelligence is intended to replace the need for human thinking. It seems clear to me that Google’s goal is still to help humans with searching, albeit for financial gain, and not to replace human thought with computational thought.
This article once again begins with the message of “I no longer have the ability to concentrate on long passages of text,” with Carr admitting that reading for a long stretch of time in one sitting has become nearly impossible for him. He stipulates that the internet is to blame, what with the rewiring of our reading habits to encourage skimming through information as well as reading and responding to short bursts of text. I have been thinking about this for a while, and, while I do find myself skimming over stretches of text from time to time, the length of time I spend reading in a given period hasn’t dwindled. When I was reading Room, for example, I was able to go through the whole book within two days. It wasn’t that I was simply skimming through the pages just to get the general idea of the story; I was engaged and wholly interested in what was written. I am unsure whether my ability hasn’t dwindled because I was exposed to the various formats of the written word on the internet at a relatively young age, or simply because I am younger than the author. My environment could also have played a role; having been born and raised in New York City and dealt with many talkative relatives at family gatherings, I have probably learned to tune out background noises quite well over the years. In any case, I do not share the struggle he seems to have.
In regards to the experiment on looking through journals and periodicals on the internet, I can’t help but feel that there are other possible reasons for such an occurrence. When people are researching something, they generally want to find the information most useful to them in the shortest amount of time necessary. Therefore, skimming through different sources of information to get a general understanding of what each piece is about is actually quite a useful skill. From experience, reading through ten pages of an article only to realize that you can’t fit it into your paper is quite frustrating. People want to know the point and move on from there.
I suppose, however, that this can be connected to the whole idea of “internet reading” as well as Carr’s (and seemingly a number of others’) problem with concentrating on leisurely reading long textual works. We live in a society where information is easily accessible, thanks in large part to the internet. This, in turn, allows people to do a number of things in a relatively short amount of time. For example, I could read Carr’s article, then type this post up while listening to newly released music, and then after that, click through Amazon or some other internet vendor and buy a number of different products. Before such an outlet was invented, all of these activities would take longer to do; while reading the article might not differ so much in time, writing out a response by hand would take longer, and purchasing a given number of goods would require me to leave the comfort of my home and visit a number of stores until I found myself satisfied. My point, I suppose, is that the internet has aided in fostering an increase in “hurriedness.” More and more, it seems as though people are running around, frazzled, feeling as though they need to get a vast number of things accomplished while the time they have been given has remained static. People drive faster, multitask on an array of activities, and wear themselves out by the end of the day, causing them to possibly indulge in what may be considered “mindless” tasks (watching television, scrolling through Facebook, etc.). However, it also gives people a false sense of security; they may think that doing certain things will take no time at all, and so they hold out on doing those tasks. Because of how quickly something can be accomplished on the computer, procrastination becomes a more common occurrence. The thing is, if such people are proven right and are able to accomplish their goals in a timely fashion at the last possible moment, they will be encouraged to continue the practice, remaining in a loop.
With these two different scenarios, a connection can be formed: when someone is reading something on the internet, whether for work or for leisure, that person wants to quickly gauge its personal relevance. If it elicits a strong emotion (passion, anger, sadness, etc.), there is a better chance of the person reading it more thoroughly. If it doesn’t mean anything to the reader, then the reader disregards it and moves on until he or she finds something worth caring about, or at least of interest. Of course, there is no guarantee that what the person winds up looking at is significantly stimulating to the mind.
I agree with Carr’s skepticism over the idea of being “better off” with an attachment to the world’s information. In the end, information is useless without the ability to interpret it. That is something computers don’t have the greatest handle on, and the only way for them to get a better handle on it is to program them to be that way. I would also imagine that, if one had access to such a large amount of information at once, the brain would become overwhelmed. In terms of learning, it is often said that studying a subject for a little while every day allows better retention of that information. However, having access to potentially limitless information would probably leave you retaining barely anything at all. The idea of everyone knowing everything is not one that I am fond of. People are increasingly sending information about themselves into the network, and things that people once kept more private are now available to anyone with a quick search. The idea of everyone being attached to all information makes me think of everyone, in a way, sharing a brain, which would mean the loss of the individual. That seems a bit strange, though, considering all of the social media sites there are now whose sole purpose seems to be “look at me”; at some point, people will simply just be looking at one another. Yet, if everyone is looking at one another, what are they looking at exactly? We could possibly become creatures constantly searching for information, but the information being looked for and consumed would lack significant substance.
In the end, I am all for a database that has a great deal of information contained within it. However, there has to be a separation of humanity and machine. People need to interpret the information they are taking in, rather than simply absorb it. If people can think critically about the things they read online, I don’t think the presence of Google or the internet in general would cause us to become less intelligent. On the other hand, with all of the obstacles and time constraints our society seems to impose, I can’t be sure that many people will take the time to think critically about what is presented to them.
I tend to agree with Nicholas Carr’s article “Is Google Making Us Stupid?”, as I believe that the infinite amount of information readily available on the internet has had an impact on our lives and changed the way we think. However, just as Clay Shirky wrote in his article “Why Abundance is Good,” I don’t believe that this change is necessarily as bad as it could initially seem.
Just like Carr, I used to enjoy reading long books, passages and articles. I would allow “my mind… [to get] caught up in the narrative or the turns of the argument” and “spend hours strolling through long stretches of prose”. Now, instead, “that [is] rarely the case anymore” as I lose concentration, “get fidgety” and “lose the thread… looking for something to do”. I found the concept of “looking for something to do” while you technically have something to do–like reading a book or article–particularly interesting, as it captured my own feelings of inadequacy and frustration. I want to focus on what I’m reading, but at the same time, I can’t. This upsets me; the fact that I would prefer to read a shortened summary while having the option of checking my emails, social media and favorite blogs makes me feel embarrassed and less accomplished. I fear that instead of growing more informed and intelligent, I am slowly but surely making myself dumber.
However, just a few weeks ago, I underwent a personal revelation. On my flight back to New York, I read two “classics” and rediscovered my love for actual literature. I remember savoring and getting excited by the language being used and could hardly put either book down. The writing was, without a doubt, more satisfying and enticing than what is offered on blogs or in shortened articles. Hence, I couldn’t help but ask myself: why don’t I read as much as I used to when I was in high school?
I am distracted. There is no doubt that the interweb is making us less focused, as it offers us an intimidating amount of information all at once. Realizing that I can still read integral pieces of literature and enjoy them gave me a huge sense of relief. Nonetheless, it also forced me to accept that I read those books because of the lack of in-flight Wi-Fi and, therefore, because of a lack of distraction.
As Carr poignantly states, the internet is becoming “our map and our clock, our printing press and our typewriter, our calculator and our telephone and our radio and TV”. Everything we need has been conveniently put in one place, so it’s to be expected that we are becoming more internet-dependent by the day. Carr appreciates what the internet has done for us, as he admits that “research that once required days… can now be done in minutes”. However, he also seems reluctant to accept our future and recalls how he used to be “a scuba diver in the sea of words” who instead is now “zip[ping] along the surface like a guy on a Jet Ski”. This metaphor is quite damning, as it suggests that we have become more superficial human beings. What we are reading and learning is no longer complete and in-depth, suggesting that our intelligence and knowledge are now only skin deep. In other words, it seems that Carr is afraid. He is scared that we will no longer choose to read an exhaustive, well-written newspaper article because we have the option to read the ‘same thing’ on Twitter… in fewer than 140 characters.
I am the first to admit that the internet is taking over our lives. However, like Shirky, I believe that we should embrace it rather than condemn it. Shirky makes references to the printing press, explaining how “Carr is correct” and that “there is cultural sacrifice in the transformation of the media landscape”. However, he also points out how “this is hardly the first time that [this sacrifice] has happened”. The printing press brought about changes that weren’t well accepted, as it “sacrificed the monolithic, historic and elite culture of Europe by promoting a diverse contemporary and vulgar one”. Change is never perfect. It is disruptive, and it takes a long time to appreciate and get used to. The interweb may not be even close to perfect, but websites like Wikipedia, WolframAlpha, Google Scholar and JStor.org are undeniably helpful and educational. In fact, even websites that could be deemed a waste of time–like Twitter, Facebook and Tumblr–have many positive qualities. This is why Shirky is correct in saying that “we must find ways to focus amid [this] new intellectual abundance” and find ways to exploit this unending source of information at its best, to “make the sacrifice worth it”.
“The Internet is a machine designed for the efficient and automated collection, transmission, and manipulation of information, and its legions of programmers are intent on finding the ‘one best method’ – the perfect algorithm – to carry out every mental movement of what we’ve come to describe as ‘knowledge work.'”
Nicholas Carr’s portrait of the Internet is bold but too simplistic and one-sided. To me, the Internet can be roughly partitioned into two parts – that of productivity: the searching, aggregating and absorption of knowledge and that of recreation: Youtube, Flash games and so on. Carr seems to focus on the former through his likening of information processing to Taylor’s industrial manufacturing, neglecting the fact that we play and relax on the Internet. Certainly, both sides benefit from the Internet’s ability to automate information processing but efficiency is another matter. Efficiency means being able to cut through the abundance of the Internet to find exactly what we are looking for. When it comes to recreation, we are often not looking specifically for anything. How then does efficiency factor in?
Programmers are, as noted by Carr, continuously improving the efficiency of search. One aspect of this has been the emergence of filter bubbles (see Eli Pariser: Beware online “filter bubbles”). The algorithms that shape our information feeds and search results are becoming increasingly personalized. We become isolated in our experience of the Internet; we become trapped inside filter bubbles. This algorithmic tailoring means that the Internet is being shaped to increasingly mirror and extend our habits and thinking, just as we are acclimating to it. If we decide to skim multiple articles instead of reading them one by one, then the Internet will happily suggest more articles to skim. Likewise, those of us who read deeply and browse at a more leisurely pace will find ourselves in a slower-paced Internet. Carr’s premise is flawed because he sees the Internet as a monolithic machine that is making us stupid when, in actuality, it is just amplifying our own characters.
FOR SOMEONE WHO CAN’T READ MORE THAN A FEW PARAGRAPHS OF TEXT AT A TIME, MR. NICK CARR CAN CERTAINLY WRITE AN EYEFUL, CAN’T HE???
My immediate reaction to this article (and I’m halfway through) is that, since our brains (or Mr. Carr’s brain) have managed to evolve or adapt or adjust to this new way of thought and to this new way of reading and focusing and remembering – since our brains have changed in just a decade or so – then we can change them back. I am very annoyed with this article right now. I am very annoyed when I/others can identify problems with ourselves – especially if it’s not just a problem, but an actual trait we are upset by, as Mr. Carr seems to be – but just talk about the problems and seem to say “WELL, this is the way things are now! This is who I am! Let me ruminate all over this little issue and just never do anything about it, like actually alter my behavior and actively work to retrain my brain so that I can actually learn, remember, process, understand, and GAIN something from the hours that have been and will always be otherwise wasted becoming stupider on Google.” Anyway.
What is a gewgaw? Aha. That doesn’t seem fair, calling hyperlinks gewgaws. Someone is resentful. “It surrounds the content with the content of all the other media it has absorbed.” No, you do, you do by having Outlook open at all times, you do by having multiple tabs open (I have seven tabs on this window, nine in the other one. I also have iMail open and Tweetdeck binging at me and also my phone just vibrated, but I’m not looking at my emails and I’m not looking at my phone and I haven’t even taken Concerta yet, but somehow I’m alternating only between this tab and the “Is Google Making Y-” tab and just… I’m so frustrated. Mr. Carr, please). Be mindful. Cultivate non-distraction. I’m terrible at discussion comments.
If hyperlinks are so bad, why do you keep using them? Why are you cultivating the distracted reading habits you condemned at the beginning of this article?
“The Internet is a machine designed for the efficient and automated collection, transmission, and manipulation of information…” I hate this. Okay, fine, the Internet, by its nature, is a machine of information storage (and thus a transmitter of information). Has no one ever heard of repurposing? So the Internet is meant to be this hyper-mechanized system of information collection, storage, and dispersion – so that’s what it is, but that doesn’t have to be how we use the Internet?? Just because the programming of the Internet/computers/whatever is very rigid, precise, speed-oriented … it just doesn’t mean that this is how we must also use the Internet. Buy the Internet a nice dinner. Ruminate on the philosophy of Buffy the Vampire Slayer for one, two, three hours. Decide you need time apart, go take a nap, come back and spend some quality time, just you and the Internet, over a nice bottle of Cracked.com. Don’t let craigslist.org or home.nyu.edu or theatlantic.com interfere. You and the Internet decided on one activity, don’t throw that away.
I don’t know. What I mean is that I just think there are very obvious solutions to all the issues it seems Mr. Carr is taking with Internet-usage, and even with the inherent traits of the Internet. Like, if this is such an unhealthy relationship, work to fix it.
Okay, I hate this article. I finished and I hate it. Mr. Carr is not a luddite, and I feel like it’s such a cop-out for him to say, “Hmph well you might call me old-fashioned! But don’t say I didn’t warn you!! I’m just going to keep using the web the same way I always have tralala.” I mean, really? YES, undoubtedly the web is distracting us and making us worse people, at least in our capacity to remember (like that study that shows people don’t remember things they read on the web, they remember the site where the information can be accessed), our capacity to focus, our capacity to read for long periods of time and to read beautiful prose on paper and to read anything except abbreviations of abbreviations. This is the effect the web has when we aren’t mindful of our usage. I love coffee, but if I drink more than one cup then I have to pee constantly all day, and if I drink way more than one cup then I turn into an awful, anxiety-ridden jittery hot mess, and so I know that I have to drink coffee in moderation or else I turn into a horrible, anxious, raging addict. Cue metaphor for internet use.
I guess this is especially annoying because does Mr. Carr know how many neuroscientists are researching and very worried about this?? I mean, for someone who says his every-day experience is an Internet hyperlink k-hole of distracted doom, he very clearly doesn’t actually learn anything at all from what he’s doing, nor does he even use the web in an advantageous way that would sate his worry/curiosity. Like not only are people researching this, but people are publishing articles and journals and how-to/top-ten-ways-you-can-rewire-your-distracted-brain lists to help people cope with their now altered brain patterns and to help them learn about how plasticity works, and how one can change one’s habits in order to find the balance between Luddite and Internet Crack Addict.
In regard to the validity of what Mr. Carr is writing about, I very much agree. I took a semester off last Fall and spent two months in bed on the computer all day. I’m still going through the articles and sites I favorited in that time, because back then I wasn’t conscious of the way the Internet had changed my ability to read the things I was interested in. I think Mr. Carr is right in what he writes, but I think he blames the web too much. What he should blame is Control+F and humans’ cultivated text-scanning abilities, and he should blame our willingness to read the abstract in place of the study, and he should blame us and our inability to not take the easy route in our acquisition of information from the Internet. I’m not so sure it’s the Internet exclusively changing us; I think our laziness and our own desire for speed speed speed is why the Internet has become a fantastical land of gewgaws and email twitter facebook cracked nytimes facebook OMG SALE ITEMS email email email gchat, when did it get to be one in the morning?
I guess what I’m trying to say is that the Internet is so flipping amazing I don’t know what to do with myself sometimes I just sit open-mouthed awing at all the amazing things that are stored in it… but it seems that Mr. Carr and I agree that, for all the good the Internet does, it’s also doing us a major disservice, and that disservice seems to be the Internet’s ability to process information for us. We no longer have to read, contemplate, interpret, understand, and draw our own conclusions with the Web. Now I Google “what was the intent of the final lines of The Magus” (“cras amet qui numquam amavit quique amavit cras amet”) and boom, the answer (and hundreds of answers) are there. The Internet facilitates a different kind of discovery than we are historically used to. We now discover information that already exists, rather than producing new information ourselves (for the most part).
And the problem I have with Mr. Carr is that in this article, he embodies the current (but not permanent) Spirit of the Web: he collects information from other sources and brings it together into his one article, but he doesn’t produce anything new. He discovers but he doesn’t create new information. He provides no solutions, no answers, he only scoffs at those who might call him old fashioned. I don’t call him old-fashioned, I (the godless, self-absorbed, self-indulgent, self-loving twenty year old that I am) call him absolutely modern – he is entirely a product of today’s tech-heavy, Internet-obsessed culture, from the top of his distracted head to the tip of his unoriginal fingers. I guess it’s our job, as readers, to identify with what he’s saying and then to make our own changes to stop the distraction. I don’t know. /discussion
Nicholas Carr’s notorious essay “Is Google Making Us Stupid?” implicitly and explicitly suggests that Carr reads the brain’s malleability, this neuro-plasticity of the 21st century, as a kind of human tragedy. Woe to us all if we cannot return to the days when grey-haired and bushy-eyebrowed men pored over tomes of Tolstoy from the comfort of their tweed jackets and rocking chairs in a cozy cabin somewhere near Exeter, NH! Carr quotes a fellow decrier, the playwright Richard Foreman, who writes nostalgically of “A man or woman who carried inside themselves a personally constructed and unique version of the entire heritage of the West.” I find this quote a peculiar one to admire, as it privileges two forms of knowledge, and ultimately power, that occupy a problematic place within the history of accessibility. I read this hypothetical man (might it be safe to say, in the history Carr/Foreman invokes, the inclusion of a woman is idealistic… and such a history void of sex-gender based discrimination is a ‘personally constructed’ history) as limiting and bordering such a heritage to a kind of self that is presumably not reflective of vast populations of this earth. And the celebration of a personally constructed heritage of the West seems to be an adorning of the myths and narratives that have propped up many a colonial and ethnocentric project.
If Maryanne Wolf’s aphorism that “We are not only what we read, we are how we read” proves true, then the Eurocentric vessel of knowledge Foreman romanticizes shapes and forms that very knowledge. This seems a later regurgitation of the Marshall McLuhan adage, “The medium is the message”. When certain folks hold exclusive knowledge, that knowledge grows tailored to its holders. Clay Shirky argues that Carr’s obsessive desire to preserve the milieu in which War and Peace was heartily enjoyed is futile. I find this kind of delusional task, reinstating War and Peace as a societal must-read, analogous to the N.E.A. definitions of participating in civic life as attending operas, ballets and jazz concerts and visiting museums. The N.E.A. undertakes a problematic task as it navigates the perilous terrain of attempting to categorize literature. Such a practice is fraught with canonic delusions and limited narratives, much like Carr’s move to classify what was lost to the Internet Age as somehow superior to the 21st-century understanding of literature.
Shirky, in his rebuttal of Carr’s piece, speaks of how War and Peace “is one of the longest novels in the canon, and symbolizes the height of literary ambition and of readerly devotion.” Shirky’s understanding of War and Peace as a kind of trope is accurate, but the political baggage of that trope doesn’t appear in his argument. Shirky is quick to point out that the printing press opened up the art and act of reading to unprecedented circles of people, but our own knowledge of history informs us that the effects of that widening are still unfelt in many communities. While it’s naïve to insist that the internet’s democratizing force will eradicate illiteracy and misinformation, it is at least a vessel whose essence moves toward such an ideal. I find that Carr’s nostalgia for pure, deep reading frighteningly resembles a nostalgia for pure, deep reading reserved for certain individuals.
So far in this class we have read a lot of differing views on where technology is leading us and whether or not technological progress should be seen as essentially good or bad when it comes to how we are using our brains. While a lot of the authors address these issues, few have any concrete ideas about what needs to happen next. What does worrying about these issues matter if we can’t figure out what to do about it? And yet, after reading these articles and discussing them extensively in class, I can’t do any better than to say that I really have no idea what needs to happen next. All I know is that, like Cascio says, “the information sea isn’t going to dry up,” so there is no use in just sitting on our asses and lamenting the fact that it’s too hard to read War and Peace these days (I wouldn’t know, I’ve never read War and Peace).
From the anxiety present in Carr’s “Is Google making us stupid?” to the idea that someday we will essentially become hyper-intelligent cyborgs, the articles that we have read each present us with at least a few ideas and opinions that resonate with our own experience. A lot of people in this class have identified with Ulin, Hari, and Carr’s shared anxiety about how it can be more difficult to concentrate on a book in our world of constant connection and ever-present distraction—I know I do. But as people who have spent the majority of our lives with the luxury of having access to technology that makes our day-to-day lives easier we also find it next to impossible to imagine being separated from this technology.
One of the things that surprised me when I looked at my own habits was just how much I use technology as an English major. I previously thought that I could keep technology and the reading I do for class separate—aside from the obvious use of Microsoft Word for writing papers—as most of the texts I read are actual, physical books that I can underline and scribble all over as I see fit. However, I found that I always read with my iPhone on hand, not because I’m texting or anything like that (friends have been and always will be a distraction from school work, whether simply texting or actually distracting you in person) but because I need a dictionary nearby and the only one I have is digital. Not only is the web the only access to a dictionary that I have, it is also infinitely faster than using a physical dictionary. It makes the reading I do for class a lot more seamless because of its quick information retrieval. There is, of course, the distraction of having the option to quickly check Facebook before returning to my book, but whether or not I do so is really just a matter of self-control.
I don’t really know where to end here because I don’t even know exactly what I’m trying to say. Maybe I’ll just end by suggesting that one of the solutions to the “problem” of increasing distraction in today’s environment is to take an honest look at our own habits and decide what we think needs to change for ourselves. This doesn’t by any means solve the issue of how we should reform the education system—a monumental and complex task—but it might help us on a personal and individual level, which can be useful in its own way.