[Apologies: this is a long academic essay, and it is intended to be the last time I write on a topic that has bothered me for years. It is a little heady, but I hope it's comprehensible.]
A colleague asked me what I thought about the current generation of college students. Now, every generation is “a generation of vipers” (Mt 23:33), and a fun game is going through Bartlett's Familiar Quotations or the like and searching out “young people” or “today's youth” quotes. No elder generation is particularly pleased with the redecorating the new kids do. A wise person, therefore, takes care not to cry wolf. When Douglas Coupland and the rest started up their talk of “Generation X” and the like, I thought it was hooey (especially since that 'generation' began three years after I did). I observed that growing up with a computer did not seem to make much of a difference. In fact, I was disappointed in how small a difference it made in folks who were in their teens in 1984 (I was past twenty). Since my life had been a series of unfulfilled promises of future innovation and Utopia, I attributed generational changes to yet another floor in the tower of human folly. Until about five years ago, in fact, I thought all of the “generation this'n'that” talk was bunk, but I am no longer entirely sure that there isn't a big, big change coincident with generation.
First, the talk is proliferating, but it is also generating actual research. “Millennials” are a group that is not merely being studied, but is actually showing cognitive and behavioral differences from prior groups. Second, the changes that we're seeing are not confined to them; they are merely more acute and pronounced among them. In other words, whatever it is that makes this generation so odious to traditionalists, it is all over the environment and merely concentrated in the young. The young are unmediated and unreflexive (lacking “meta,” if you will).
Nicholas Carr's The Shallows: What the Internet Is Doing to Our Brains is a provocative, and I think essentially true (if premature), summary of one facet of the generation. Too few people read the book, and fewer took it to heart. Carr examines purely the informational and cognitive effects of the Internet. If we imagine that a topic can be mapped spatially, we could think that it would have, at the top, “apple trees,” and then “uses for,” “history of,” “diseases and pests,” “types,” “cultivation,” and a number of other topics one level below. Each of those subtopics has below it a tree of further topics. This brachiation continues until the information forms a taxonomy familiar to all of us who grew up in the age of the paper book. It is the Linnaean system, after all.
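The spatial map implied here is simply a nested outline. For concreteness, a minimal sketch of that taxonomy in HTML, using only the essay's own subtopics (the deeper branches are elided):

    <!-- The taxonomy of “apple trees” as a nested list. Each subtopic
         would branch again, level below level, down to the leaves. -->
    <ul>
      <li>apple trees
        <ul>
          <li>uses for</li>
          <li>history of</li>
          <li>diseases and pests</li>
          <li>types</li>
          <li>cultivation</li>
        </ul>
      </li>
    </ul>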
The Internet has squashed the hierarchy of knowledge into a single search line. The web has, Carr argues, made knowledge simultaneous and eliminated the capacity of learners and thinkers to hold taxonomies. We do not develop depth, in other words, because our information is all planar. Additionally, he focuses on the myths of “multitasking.” Multitasking is supposed to be a compensatory skill that allows for dual attention, like the superior aliens in Arthur C. Clarke's Childhood's End who could read a book and hold a conversation at the same time. However, neurologists report that no one multitasks. In fact, “millennials” are worse at multitasking than their elders; all anyone does is, as we used to say when denigrating Windows 3.1, “task switching.” There have been confirmations since the 2009 Stanford study as well: swapping tasks (cell phone, driving; phone, class, notes, chat) means doing several things very, very poorly. What's more, the younger the person, the more the person believes his or her performance is excellent. Thus, as Carr argues, the interconnections are creating a flat, unremembered, and therefore unprocessed cognition.
My colleague's question was along a different line. He had noticed something else. He said, “The iPhone, the iPad . . . the I is the important thing.”
He's not entirely wrong, but it would be a terrible mistake to retreat to the comfortable shell of the cultural conservative's charge of egoism. Let's leave that for George Will's monomania. Saying things like that is slightly more useful than throwing a spear into the ocean: the tide is coming in anyway (a reference to Caligula, not Cnut).
The reason that I believe we are right to note that Something Happened is that it happened, and is happening, to all segments of American and other technophilic cultures. It may be easier to see in the United States, where there are virtually no cultural institutions or governmental backstops against cultural change, than in European nations, where traditions and various quasi-governmental or governmental institutions and requirements might slow a change like this. (E.g. studying for A- and O-levels might be a national reinforcement for traditional, taxonomic knowledge and layered thinking.)
Prior analyses miss the point: neither the computer nor “the Internet” is responsible for the “short now” and the alterations we are seeing in consciousness. The Internet is the connection between various networks and the services carried over them – primarily e-mail, Usenet, and the world wide web. It is little more than TCP/IP and the hardware of routers and hubs, which should amaze a space alien and an NSA officer alike. An apparent “acceleration” has been palpable since the 1790's, when Wordsworth complained, in the “Preface” to the second edition of Lyrical Ballads, that newspapers were making the world transient.
No: let us be frank: what has made the change is the cell phone, the devouring of “the Internet” by the world wide web, and the decision to open the web to .com's. When its history is written (on flash card), the allocation of the .com's will be the moment that changed everything. What's more, my bedraggled generation knew it at the time. We sported “The Internet's full: Go Away!” t-shirts when AOL was allowed onto e-mail. We were not thrilled that CompuServe was going to invade our paradise. The idea that graphics were going to be expected on an HTML site struck me as vulgar. Even after every thirteen-year-old wrote a Wikipedia article on the “company” he had made up with his best friend (Zack n' Tay Limited Game Designers, yo) and went into war mode when it was deleted that night, the mind of the generation did not change.
Supplement vs. Replacement
Carr's central thesis – the loss of depth to knowledge – was underway the moment hypertext's promise began to be diverted into a syntax for pictograms, but there was no guarantee that it would dominate the American and Canadian mind. Hypertext and graphics are fundamentally in tension. I was angry in 1997 because I saw pictures taking over when hypertext offered limitless learning. The promise of hypertext had been an eternally expanding, encyclopedic structure to knowledge, whereby I could make all of my footnotes link, all of my allusions link, all of my obscure jokes link to a page where I explain them, and, were I a fiction author, I could offer layers of additional comment by hypertext. HTML could have meant layer upon layer of meaning placed additively onto a piece of information, with the links acting as valves suggesting a hierarchy of topics, but it was never much used that way, owing to the preference for graphics and the Babel-like quarrel of languages, fonts, and monitors. That model of hypertext – lateral linking to supplemental information – suffered at the hands of a commercial impulse to replace knowledge, and the .com's HTML virtually strangled it in its crib with a world wide web where every site was illustrated first and foremost, the HTML merely a method of getting the pictures to stick and click.
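To make the supplement model concrete, here is a minimal sketch of the markup it implies – every file name and URL below is a hypothetical illustration, not a real address:

    <!-- Supplement model: each link adds a layer beneath the text. -->
    <p>
      No elder generation is pleased with the redecorating
      <a href="notes/generations.html">[note]</a> the new kids do
      <a href="http://example.org/bartlett-youth">[source]</a>.
    </p>
    <!-- The first link descends to the author's own explanatory note;
         the second leaves the site entirely for a fuller treatment.
         Either way, the reader moves down a level of the taxonomy
         rather than sideways to another surface. -->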
The non-profit and educational use of hypertext to supplement never quite died, even if it never developed. It is part of Wikipedia, for example, and some non-profit bloggers use it (e.g. DailyKos, Crooks and Liars, ThinkProgress). However, the commercialization of the world wide web meant that a revenue model was built on advertising, and advertising could be metered in “clicks” and “eyeballs.” Therefore, it was and remains in the interest of every newspaper and commercial entertainment site to limit supplemental information. For example, to supplement my allusion to Caligula above, I linked to a web page on British history; that page, being non-profit, had links to other web pages, and if you click on one of those, I “lose” you. If I were making money off of every click, then, I would be a fool to offer that link. Instead, I would have guidelines like the following (a sketch of the markup they produce follows the list):
1. If you need to explain a term, link to an on-site page.
2. A pop-up that can be sponsored with a single line is ideal for an explainer, as it will not risk losing the “eyeball” or “click.”
3. Other links should be to other exciting stories or compelling teases that are on the site.
4. Toward the top and bottom of any content should be an image or tease that will lead to another click on the site.
From a commercial point of view, HTML and XML coding should be streamlined specifically not to supplement the information, but rather to be null (the explainers) or to replace one revenue click with another. A person who goes to Sports Illustrated online may go to four stories and have no idea of what he or she has read. (There is no alternative but trying it for yourself.)
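Set against the supplement sketch above, here is what markup written to those four guidelines might look like – again, every path and image below is a hypothetical illustration:

    <!-- Replacement model: every link trades one revenue click for another. -->
    <img src="/teases/top-story.jpg" alt="You won't believe what happened next">
    <p>
      The <a href="/explainers/hail-mary.html">Hail Mary</a>
      <!-- guideline 1: the explainer stays on-site; no "eyeball" is risked -->
      set up <a href="/stories/4412.html">another thrilling finish</a>.
      <!-- guideline 3: the only other link is a tease to another on-site story -->
    </p>
    <img src="/teases/bottom-story.jpg" alt="More from our site">
    <!-- guideline 4: teases at top and bottom, each worth another click;
         no link leaves the site, and none deepens the story -->

A sponsored pop-up, per guideline 2, would do the explainer's work without risking even that one on-site click.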
My cell phone ate my house!
The mobile phone first appeared as a luxury item: a feature of ostentatious display worthy of Thorstein Veblen. Nowadays, satirists ridicule the way that so commonplace and cheap a feature of everyday life was once so impressive, but after its first appearance as an alluring object of desire, the phone next appeared as an object of pure utility, and this was the second phase of the mobile phone's life: business. Executives were not branded with these phones; deliverymen and drivers were, at least initially, so that dispatchers and bosses could track their locations. Later, middle managers got the phones so as to be “in pocket” when they left work. The phone was not a reward, but a sort of voluntary ankle bracelet monitor. Next, and perhaps currently (although I doubt it), Christine Rosen documents how men employed cell phones as lek items in their mating dances in singles bars. Rosen's research dates from 2004, and I suspect that the gender display has since altered, as I now observe women using cell phones as items of competition more than men using them as display. Each of these moves shows a masking and mutation of the underlying function and quality of the technology as tool; the utility of the device is in masquerade. If we look through history, we can see that this is routine: the important thing for any technology is to obscure its utility (my beloved personal computer was a way to play cool games, after all, and, secondarily and resentfully, a way to run spreadsheets).
In the earliest days, cell phone carriers advertised and promoted a high-revenue stream that was already mature in Japan: the text message. Americans were slow to warm to it, but they succumbed eventually. Instant messaging, the text message's desktop forerunner, is one of the innovations that curmudgeonly computer users of the first PC generations already had strong feelings about. After all, it was AOL and Yahoo's spawn, and it encouraged the sort of vapidity that we associated with the Wrong Sort of People. Consequently, there was a conservative backlash in the online world that made it “unhip.” The backlash simply aged out, and the cell phone sublimated the entire medium (the Internet itself).
“Convergence” was the dream of the telecoms from the moment of AT&T's deregulation. One of the interesting features of the reporting, the book, and the film Enron: The Smartest Guys in the Room is that Enron's fall was due to dreams of graft through convergence. What provoked investigation into the firm was the splatter of fictional accounting and the overt evil of creating an energy shortage so as to profit from a futures market in energy, but what made for the run on shares in the first place was Enron's claim that it would provide video on demand via the Internet through a proprietary pipe.
You see, the idea of owning the pipeline by which consumers would get their “content” has been the mouth- and pants-watering concept driving investors in telecoms for decades. Imagine a day when the consumer pays $1 to the phone or cable provider to watch a movie, and $40 a month to the phone or cable provider, and then can pay an extra $5 a month to the phone or cable provider for super speed, and then $.05 per text message. Meanwhile, the publisher of the film or television show will also have to pay the cable or phone company a very large amount of money to get the product to those consumers, because the pipeline will be proprietary, and the consumers will be watching ads (and paying). You can be forgiven for thinking of your Kindle now.
Is it any wonder, then, that the Palm Pilot is all but gone? Are you surprised that BlackBerry is dying? The question was whether the telephone would eat the PDA (personal digital assistant) or the PDA would eat the telephone. At home, there was never any question: the television was going to eat the personal computer from the very beginning. The cable and satellite companies had considerable interests in making the television the single portal for all things in the home, including the telephone. The phone companies had, and have, an interest in ensuring that the telephone replaces the computer. Since these are merged and integrated corporations (e.g. Sony producing movies, games, computers, televisions, and phones), there is ample power behind both pushes. Thus, the personal computer is now a laptop, and the laptop is getting more and more PDA-like, so that it can be easily replaced by the telephone.
If I were dystopian, or a technology writer, I would predict that the future would have no PC's in it. Desktop computers would exist only as servers. All consumers would have notebook computers, and these would be phones, and phones would be laptops. Thus, everyone would always pay a monthly connection price for telephony and Internet, and client-side software (programs one actually owns and controls rather than rents) would all but disappear. I suppose such predictions hardly take a crystal ball these days, but there is still every chance of being wrong.
My Cell Phone Is My Home
The cell phone has worked two mutations on mental space. The first showed in its early use by draymen: it makes the person always “at work.” Work ceases to have a physical definition in the mind of the worker, because there is no way to leave its anxieties and obligations. People have always taken work home, but taking the workplace home is a new feature. As workers have moved into offices and changed their work from physical to mental activity, the work has come to consist of managing personalities, performing business maintenance, and analyzing productive and consumptive patterns. These intense and anxious activities now travel by the web, by e-mail, and, most consistently, by the instant message. The phone is always on so that the worker can access leisure, and that paradoxically means never having any leisure, because he or she always intersects with “at work.”
The Instant Message is an always-on application. It is “terminate and stay resident” in mental space. Unlike e-mail, an instant message interrupts. It rings like a phone. Hence, it can “go off” at any moment, and it is involuntary. It therefore introduces the other spatial mutation: it makes every place “home.” Just as we never before took the workplace home, we never before took home to work. The network of friends and family that constitutes relief and personality maintenance is available for reference at any moment via the instant message, in exchange for one's being available as a resource, to the same degree, for each member of that network. Those who turn their status to “unavailable” will lose their place in the network, and those who turn off their devices will lose the support they seek.
These spatial mutations have led us to a familiar sight. Where fast food restaurants famously had “no personal calls on company time” policies, they now have counter workers with open cell phones, distracted by customers. Executives are troubled in meetings and in leisure alike by the presence of the other inside the phone. The young in their high school and college classes hold surreptitious conversations not by whispering, but by texting, and co-workers go to lunch together to sit silently across from one another and stare at phone screens. The expectation that the phone will be present and maintained is such that children turning off their phones can create fears of disaster among parents.
What has happened is that the device has amalgamated all spaces into one space. Rather than saying that it is an expanding ego, it is better to think of it as a vanishing self. The “I” of the smart phone user is not growing vacuously selfish, but rather numbly evacuated: it has been robbed of home and replaced by a non-space that cannot be considered a person and that has no rights – just obligations.
Marshall McLuhan reminds us that the myth of Narcissus is not about a narcissist. Narcissus was a beautiful youth who fell in love with his reflection. He did not love himself, but rather loved his image. If you cannot read McLuhan's prose, at least see McLuhan's Wake.
Each piece of technology offers us a beautiful image of ourselves at first. It anesthetizes us. It says, “I am an object of beauty, not a smoke-belching automobile.” It says, “I am the most desirable thing in the world, and I will make you yourself, only better and more beautiful and stronger.” And it delivers on this promise, as well. It is only later, much later, that we discover that our workplaces are twenty miles from our homes and that our air is dirty and that we pay for gasoline every month. Technology is the mechanical Maria of "Metropolis."
The personal computer's actual attraction in its early days was games, and the earliest attractions of the Internet seem to have been pornography (Usenet's alt.binaries carried photos long before the web): these were the mosquito's saliva. The cell phone said, “I am 'Star Trek' communicator cool,” and “Why wait to get on the web?” and “Why not stay in touch?” It also said, “Be stronger than you are: never be stuck in an emergency again.” (The bewildered cell phone users of 9/11/01 are a thing I won't forget.) Facebook said, “Get Grandma to see the baby” and “Keep in touch with your kooky college friends” and “Share your wiiiiild opinions with your friends, without those nasty people there.” It was only later that we realized that police were routinely turning on the GPS feature of smart phones without warrants and that a single telecom was giving the FBI locations eight million times in a year. Only later did we realize that Facebook could generate 11,000 pages of data on a person whose account had lasted six months – data it retained after the account had been deleted.
Only later did we realize that our mental spaces had collapsed.
The collapse of the hierarchical organization of information that the flat plane of the Google search bar introduced, the borderless, one-site, replace-over-add organization of the commercial web experience, and the encapsulating, app-friendly, user-cosseting approach of Facebook have together meant a change in the way that we can think. Nicholas Carr said, on “The Colbert Report,” that he would be unable to write his undergraduate thesis today, because he can no longer concentrate so deeply. While that may or may not be true (whether we can regain old structures as easily as we lose them is an open question), a change seems to be underway for us adults, and it undeniably shapes the young.
Walter Ong, S.J.'s Orality and Literacy proposed the thesis that literate societies think differently from pre-literate ones. There is a great deal of evidence for his observation, and McLuhan spoke of the same thing. Oral societies have long, long memories. They memorize epic poems word-perfect. They hold catalogs in their memories. However, they sacrifice analytical functions for this capacity. They are not inductively flexible, because they are not ready to forget. (To accept a concept like the germ theory, one must be ready to abandon humors and airs.) Literate societies displace their memories onto written words and can be fantastic analysts, but they lose the capacity for long knowledge and layered culture in the process. In essence, they get a shorter “now” in exchange for a more logical one.
What, then, happens when we displace the word onto the search bar? Strictly speaking, we are not literate anymore. We are not relying on books. We are relying on searches. When did you last look something up with finger and eye? What happens to memory when there is never, ever a need to reach for a book?
Let me dwell on that for a moment and add all of our elements together: home, work, school, bank – all of these are the phone. “You” are nowhere and all places simultaneously, just as each of those places is “here” and “now,” although time is not stacking in memory, but rather on a remotely stored server “wall” or blog. There is no shelf of books, no order for those books, no searching through such a shelf, and no top or bottom. Nor do you have to travel to a place, or even find an Internet site to search, to gain information. It is possible to say, as more than one student of mine has said, “I don't go on the Internet. I just use Facebook.” This is because the distinctions have no conceptual meaning and no active difference. There is no place, no site, because all are one. When that occurs, what happens to memory? What happens to organizing time?
If we want to be truly chilled, what happens to culture? Narcissus has an image of himself reflected to him so that he believes he is seeing beauty, believes that he is seeing the beauty of himself, but it is only an illusion. Narcissus, we recall, starved to death. He could not look up from the small reflection before him to know where he was, what time it was, or who he was.
There is no condemning this or that thing here. The tool is not the problem, after all. It is our tool-mind, our innately cybernetic culture – we form the tool to resemble us, and soon we resemble the tool. Perhaps, then, we can realize why we are hungry.