The Web’s creators never made a secret of their goals and dreams. In fact, they’ve been remarkably forthright and even outspoken about what they were doing. The story of hypermedia – the story of hyper-linking, hyper-mixing, and hyper-sourcing – is the story of how computer engineers conceived a role for themselves as mighty social architects.
You can read Part 1 here. It concluded with this:
What we must understand, therefore, if we seek a clearer picture of the future, is the history of the inventors’ dreams. By exploring how those dreams were made real during the decades of hyper-linking and hyper-mixing, we may gain a better sense of what will come in the decade of hyper-sourcing and beyond, as their remaining dreams reach fruition. There is still time to ensure those dreams don’t become our nightmare.
Part 2 continues below the fold.
Ideas for providing instant, inexpensive access to all the world’s information emerged long before the Internet, before its precursor the Arpanet, and even before transistors. And so did ideas about how such access would inaugurate a new age of peace and prosperity.
In World Brain, a 1937 article about the future of encyclopedias, H.G. Wells predicted that advances in micro-photography and film projection would lead to "the intellectual unification of our race." It was an outcome he clearly favored. His description of the march of human progress was a thinly disguised strategy for achieving universal understanding. The first step would be to organize the sum of human knowledge into a completely accurate and authoritative Permanent World Encyclopedia. It would then be copied to microfilm, and ultimately distributed to every individual on earth. Voilà. World peace.
That was just the beginning. Wells predicted the system would take on the properties of a new planet-spanning organ... an "all-human cerebrum" endowed with "the concentration of a craniate animal and the diffused vitality of an amoeba."
Though Wells was not a technologist or inventor himself, he paid attention to what was going on in those communities. The commercial microfilm business of the day was undergoing huge breakthroughs, and key innovators had targeted markets in libraries and education. Given the widespread wish for peace and progress, even as Europe was on the verge of yet another devastating war, betting that microfilm could save the day was better than having no bet at all.
At the time, Wells was a highly regarded historian and political essayist. Today he is better remembered for the science fiction classics The War of the Worlds and The Island of Doctor Moreau. But World Brain was no exercise in make-believe. Wells foresaw that knowledge would become shareable as never before, prompting a grand reshaping of political and economic power. That side of his prediction did come true. It remains to be seen whether his certainty of a happy outcome was equally warranted.
* * *
General speculation about machine brains was largely suspended during World War II. But, just a few weeks before the bombings of Hiroshima and Nagasaki punctuated the end of combat, American physicist Vannevar Bush reopened the discussion. Bush was then the most influential scientist within the US government. He had been a chief organizer of the Manhattan Project, and headed the office that directed the work of thousands of scientists in support of the military effort.
Now Bush was devising projects the scientific community could undertake after the war was completely over. His July 1945 article "As We May Think" sketched out a device intended to provide an "enlarged intimate supplement" to human memory. He called it a Memex.
Once again, the future was projected from the tools at hand... once again, microfilm. Bush felt that capturing and storing all the world's data would prove a relatively straightforward task. When an encyclopedia could be compressed into the volume of a matchbox at the cost of a nickel, it followed that an entire library could fit within a desk at an affordable price. Such a desk could then be equipped with a set of projectors, screens, and automated tools that could move a requested piece of film into place for reading. Miniature cameras and supporting equipment would be used to insert new material. He even speculated about long-term possibilities for "direct" transmission of information into the brain, though he made it clear that a more basic problem needed to be solved first.
Bush expected that the Memex's most daunting technological challenge would be engineering a process that could tie its stored items together in useful ways. The system would need mechanisms for recording the "trails" left by researchers as they investigated various topics. Those trails would include both the threads of tangential excursions and the tracks back to original sources. Moreover, the trails would stand as encyclopedia entries of their own, open to re-exploration and augmentation by others.
The modern Web is deeply rooted in that vision of the Memex’s "essential feature," which Bush called "associative indexing."
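In modern terms, a trail is little more than a named, ordered chain of links between stored items – one that is itself stored and shareable like any other item. The sketch below is purely an illustration of that idea in Python; the names Item and Trail are hypothetical stand-ins for Bush’s microfilm frames and index codes, not part of any real system, his or ours.

```python
# Illustrative sketch only: a modern-terms rendering of Bush's "trails",
# not a reconstruction of the Memex itself.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Item:
    """A single stored record -- a page, photograph, or note in the archive."""
    item_id: str
    content: str


@dataclass
class Trail:
    """An ordered path of items left behind by a researcher.

    A trail captures the main line of an investigation, its tangential
    excursions, and the way back to original sources -- and, being data
    itself, it can be stored, shared, and extended like any other item.
    """
    name: str
    steps: List[str] = field(default_factory=list)            # item ids, in visit order
    side_trails: List["Trail"] = field(default_factory=list)  # tangential excursions

    def add_step(self, item: Item) -> None:
        self.steps.append(item.item_id)

    def branch(self, name: str) -> "Trail":
        """Begin a side trail without losing the way back to the main one."""
        side = Trail(name=name)
        self.side_trails.append(side)
        return side


# Bush's own example: a researcher tying together sources on why the short
# Turkish bow outperformed the English long bow, then wandering off-topic.
turkish = Item("bow-01", "Properties and history of the Turkish short bow")
english = Item("bow-02", "Properties and history of the English long bow")

trail = Trail(name="short bow vs. long bow")
trail.add_step(turkish)
trail.add_step(english)
excursion = trail.branch("elasticity of available materials")
```

The bow-and-arrow example is borrowed from Bush’s article; squint a little and the resemblance to a chain of hyperlinks, or to a shared reading list, is hard to miss.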
Given the fresh experience of war, Bush was well aware of the dangerous directions science could take. Nevertheless, like Wells, he valued collecting and redistributing information as an end in itself, believing that the "great record" of human knowledge should be wielded for the "true good" of humankind. The onset of the Cold War, however, led to circumstances in which the highest priority went to the research demands of those leading the nuclear arms race.
* * *
It was no surprise that Bush favored microfilm for the Memex. What was the alternative? Vacuum tube computers were monstrously expensive. They were hot, bulky, failure-prone, difficult to program, and simply not up to the task of cheap and convenient information sharing.
The first computers built after 1945 were Princeton's MANIAC with about 6,000 tubes and the University of Pennsylvania's ENIAC with over 17,000. Both were used to design hydrogen bombs. Vacuum tube computing technology advanced rapidly during the Cold War, culminating in the late 1950s with the 55,000-tube machines at the heart of the SAGE anti-aircraft system. At 275 tons with a half-acre footprint, that was as big as they got. Transistors were coming into general use around that time, and research on integrated circuits was well underway.
J. C. R. Licklider, a psychologist who had worked on the SAGE project and had gone on to become a computer scientist, picked up the baton of the intelligence-accelerating crusade in 1960. In a widely read article titled "Man-Computer Symbiosis," Licklider set out an ambitious new challenge for the burgeoning computer industry.
Until then, computers were considered to be mechanical extensions of thinking power, ideal for speeding up people's ability to answer technical questions. Licklider proposed building machines capable of real partnership in formulating those questions. With time, he predicted, "human brains and computing machines will be coupled together very tightly." Humans and computers would then enter into a symbiotic relationship, each dependent on the other for continued survival.
That first article made Licklider a star. Subsequent books and articles fleshed out the vision even further. In "Memorandum For Members and Affiliates of the Intergalactic Computer Network" (1963) he laid out the need for protocols and standards that would facilitate collaboration across distinct types of computer systems and programming languages. In Libraries of the Future (1965) he proposed a desk-like contraption reminiscent of Bush’s Memex, outfitted with a keyboard and a display screen. But instead of local microfilm storage, the desk would rely on a "vital" connection through an "umbilical cord"-like cable into a "procognitive utility net."
Perhaps the most breathtakingly prescient of Licklider’s articles was titled "The Computer as a Communication Device." Co-written with Robert Taylor in 1968, it recounted their participation in an experimental meeting conducted via online interaction... the first of its type. Licklider and Taylor concluded that the meeting had been far more effective than a solely face-to-face version would have been. They went on to describe the technological paths that could make such meetings economically viable on a vast scale.
Licklider’s technical achievements and writings were further magnified by his strategy of acting like a Johnny Appleseed for computer networks. As his career moved between academia, government, and the private sector, he used his positions to fund university computer science programs, to stimulate innovative private/public collaborations, and to groom protégés whom he could later help with research and development awards.
Two colleagues stood out. Larry Roberts went on to manage the program at the US Advanced Research Projects Agency (ARPA) that launched the Arpanet, and is now celebrated as one of the fathers of the Internet. Doug Engelbart, who coordinated that first experimental online meeting, went on to lead the ARPA-funded Augmented Human Intellect Research Center (ARC) at the Stanford Research Institute. While at ARC, Engelbart and his co-workers were the first to demonstrate, among other things, hypermedia.
* * *
If you’re reading this article on a computer, it’s possible you’ve got one hand on a mouse. Patented in 1970, the mouse is officially an "x-y position indicator for a display system." Doug Engelbart had about twenty patents to his credit before that one, many related to video display technology. The mouse, those patents, and many other significant achievements stemmed from a lifelong goal he set for himself in the early 1950s: "As much as possible, to boost mankind’s collective capability for dealing with urgent, complex problems."
Engelbart had read Bush’s "As We May Think," and was profoundly influenced by it. That, plus practical experience as a Navy radar operator in the Philippines, helped trigger ideas about electronic Memex consoles. Why not display stored text using cathode ray tubes instead of microfilm projectors? Compelled by the grand vision of human-augmenting technology, Engelbart concluded that making great leaps was far more important than making great gadgets. So, when he formulated his rather dramatic life goal, he decided that organizing a purpose-driven "pursuit strategy" was itself a complex and urgent priority.
After leaving the military, Engelbart followed his own advice, enrolled at UC Berkeley, and quickly earned a Ph.D. in Electrical Engineering and Computer Science. Soon after that he joined the staff at the Stanford Research Institute, a leading-edge think tank with a mission statement he found appealing: "We are committed to discovery and to the application of science and technology for knowledge, commerce, prosperity, and peace."
Engelbart continued to work out the details for a variety of Bush-inspired ideas, and eventually won funding from the Air Force Office of Scientific Research to write a report that pulled everything together. The result was "Augmenting Human Intellect: A Conceptual Framework" (1962), a book he later referred to as his "Bible." The parallel is intriguing. Engelbart’s Bible, like Gutenberg’s, punctuated the acceleration of human progress. But where Gutenberg’s marked the first tangible result of long planning, Engelbart’s marked the announcement of a plan whose results were yet to come.
Engelbart’s "pursuit strategy" had by now evolved into a full-blown augmentation classification system. His framework included lengthy descriptions of human communication and education techniques, linguistically-based learning theories, goal-centered research methodologies, and the prospects for new intellect-augmenting artifacts. He summed up by advocating a "bootstrapping" approach designed to boost human power for boosting human power. The next step, therefore, would have to be a fast-moving effort to build actual prototypes.
More money came through, and Engelbart’s ARC was in business. A key project was the oN-Line System, known as NLS. Its first public presentation at a San Francisco theater in 1968 was later proclaimed "The mother of all demos." Beyond the mouse, it featured pioneering innovations such as raster-scan video monitors, interactive multi-user text editing, outline processing, hypertext links, and windowed interfaces.
Despite his huge accomplishments, Engelbart’s fortunes shifted in the late 1970s. Building on the foundation of enabling devices he and his ARC team had already created, Engelbart put ever greater effort into network-based collaboration tools known as "groupware." But the dawn of the personal computer revolution created exciting new opportunities for talented engineers while fostering the spread of an anti-mainframe ethos. Researchers left ARC and funding dried up.
Engelbart continued to promote his bootstrap concept in the ensuing years. He developed new ways of explaining it, with phrases such as "capability-improving capability," and "getting smarter about getting smarter." He reworked Licklider’s forecast of man-machine symbiosis, consolidating it into a chart called the Co-Evolution Frontier of Human Systems Development and Tool Systems Utilization.
With people like Douglas Engelbart shaping the world, who needs science fiction?
Though Engelbart was the first to build a working prototype of hypertext, he was not the first to use the word, and he was not the one who finally made it available to the public. Hypertext presented special problems. Computer technology had attracted many champions by the middle of the 1960s, but only a few were daring enough to attack the challenge of associative indexing that Vannevar Bush had identified twenty years before.
* * *
If you’re reading this article online, you might be aware that your web browser’s location bar displays its address within a string of letters starting with http. That familiar sequence stands for "hypertext transfer protocol."
As similar sounds reveal, the etymology of hyper reaches back to the same roots as over and eave, the place on a roof from which water drips, and where an eavesdropper might hide. The word hyperbole – throwing a ball (or an idea) far over its target – added the tone of exaggeration and amplification. Ask a typical person for a definition of hyper, and the answer you’re likely to hear is "a lot" or "overactive." It’s fair to say, however, that the founders of hypermedia intended a sense more like extending, spanning, and reaching over.
So, who put the h in http? And why?
Hypertext was just one of many words coined by Ted Nelson, a Harvard-educated sociologist with a famously severe case of ADD. He defined it as "a body of written or pictorial material interconnected in such a complex way that it could not conveniently be presented or represented on paper." Its genesis was a graduate school project he never completed, a radical attempt to create a computer-based storage and retrieval system for his own easily-distracted use.
Nelson had dreamt up a personal information system "with every feature a novelist or absent-minded professor could want." Though it grew far beyond his ability to bring it to fruition, five years later, in 1965, he condensed the ideas into a paper presented to the venerable Association for Computing Machinery.
Nelson’s vision captured the essence of the associative indexing approach suggested twenty years earlier by Vannevar Bush, whom the paper cited at length. But Nelson pushed deeper and in new directions, with ideas about profusely overlapping non-sequential relationships between electronically stored text and images. He suggested file structures that could support complex, changing, and indeterminate connections between items even as content was added and modified. As a result, one item could effectively contain others within it, and all could maintain reliable links back to original sources and authors.
Nelson created an array of new terms such as docuverse, transpointing, and transclusion to help convey his hypertext dream. His overarching vision synthesized prior concepts of a democratic, universal, evolution-advancing library emerging from the swirl of human creativity. Hoping to fill the world with a network of Literary Machines, he embarked on a crusade to commercialize that vision under the brand name Xanadu.
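Transclusion is easiest to picture as inclusion by reference rather than by copy: a document is assembled from spans of other documents, and every span carries a pointer back to its source and author. The Python sketch below is a loose, hypothetical illustration of that idea – the names Document, Span, and render are invented for the purpose – and not a description of Xanadu’s actual design.

```python
# Illustrative sketch only: inclusion by reference ("transclusion"),
# not a description of Xanadu's actual data model.
from dataclasses import dataclass
from typing import Dict, List, Union


@dataclass
class Document:
    """A stored text with a stable identifier and a known author."""
    doc_id: str
    author: str
    text: str


@dataclass
class Span:
    """A transcluded excerpt: a reference to part of another document.

    Nothing is copied. The excerpt is resolved at reading time, so the
    link back to the original source and its author cannot silently break.
    """
    doc_id: str
    start: int
    end: int


# A composed document mixes original prose with transcluded spans.
Piece = Union[str, Span]


def render(pieces: List[Piece], archive: Dict[str, Document]) -> str:
    """Resolve transclusions against the archive and produce readable text."""
    parts = []
    for piece in pieces:
        if isinstance(piece, Span):
            source = archive[piece.doc_id]
            parts.append(f'"{source.text[piece.start:piece.end]}" ({source.author})')
        else:
            parts.append(piece)
    return " ".join(parts)


# Example: quoting an earlier document by reference rather than by copy.
archive = {
    "memex-1945": Document("memex-1945", "Vannevar Bush",
                           "Consider a future device for individual use."),
}
essay: List[Piece] = ["An essay can quote", Span("memex-1945", 0, 24), "without copying it."]
print(render(essay, archive))
```

Because the excerpt is resolved from the archive each time it is read, credit to the original author travels with it automatically – the very property Nelson’s business plans would depend on.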
Thorough attention to authorship was inherent to Xanadu’s business-friendly design. A fundamental principle was to ensure that document links would not break, especially those between content and its creators. That would make it easier to provide the credit, payment, and protection schemes important to authors and intellectual-property rights holders.
Xanadu came to epitomize the idea of a reliable archive from which content would never be lost. There would be no misplaced links, no "page not found" errors. It promised the liberty to link without cheating an author, and freedom to mix and remix without worries of plagiarism.
That ambitious vision had a price. Nelson wanted to empower users with revolutionary new screen-based tools. He had little desire to compromise by babying them with ease-of-use gimmicks and familiar paper-based metaphors. Like Engelbart, who became his friend, he preferred "to alert people to what can be, rather than take it down to the lowest common denominator."
Nelson’s vision of hypertext drew far more interest than his business plan. Like his student project, Xanadu never reached completion. It was a proprietary enterprise saddled by a strict list of requirements. Hypertext, however, was a relatively flexible idea that others could quite easily reframe, reinvent, and, if they so pleased, water down.
Part 3 will focus on Tim Berners-Lee.