Last weekend I attended the Singularity Summit at Stanford University, a conference about the future implications of AI and nanotechnology. Since I write a blog about macroeconomics, in addition to reviewing the science I will evaluate its macroeconomic implications. This is the web site for the event: http://sss.stanford.edu/
While the surface thesis is that advances in artificial intelligence and nanotechnology are about to take off, there are substantial economic implications as well.
What is the Singularity?
In math and physics the term "singularity" means something akin to things no longer behaving the way they did but instead growing "by leaps and bounds." If you want a dictionary definition - singularity: a point at which the derivative of a given function of a complex variable does not exist but every neighborhood of which contains points for which the derivative exists. In short, it's a place where the rate of change (the derivative) grows gigantically.
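A standard textbook example (my addition, not from the summit) is f(z) = 1/z, which is singular at the origin:

```latex
% f(z) = 1/z is differentiable at every z \neq 0, so every neighborhood
% of 0 contains points where the derivative exists - but at z = 0 itself
% the derivative does not exist and |f| blows up.
f(z) = \frac{1}{z}, \qquad f'(z) = -\frac{1}{z^{2}} \quad (z \neq 0),
\qquad \lim_{z \to 0} \lvert f(z) \rvert = \infty
```

That blow-up near the singular point is the "leaps and bounds" behavior the futurists are borrowing the word for.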
First a capsule from the web site:
"In futures studies, the singularity represents an "event horizon"
in the predictability of human technological development past which present
models of the future cease to give reliable or accurate answers, following the
creation of strong AI or the enhancement of human intelligence. Many futurists
predict that after the singularity, humans as they exist presently won't be
the driving force in scientific and technological progress, eclipsed cognitively
by posthumans, AI, or both, with all models of change based on past trends in
human behavior becoming obsolete."
In the 1950s, the legendary information theorist John von Neumann was paraphrased by mathematician Stanislaw Ulam as saying that the ever-accelerating progress of technology "gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."
What these dudes are saying is that technology is progressing at an ever-increasing rate and that soon this will have a qualitative as well as quantitative aspect. In short: technology is about to change mankind in a manner that will be irreversible.
The best exposition of this is Ray Kurzweil's "The Singularity Is Near." The accompanying web site is updated regularly and contains any major paper or talk on the topic - pro and con.
In Kurzweil's words: "What, then, is the singularity? It's a future period
during which the pace of technological change will be so rapid, its impact so
deep, that human life will be irreversibly transformed. Although neither utopian
nor dystopian, this epoch will transform the concepts that we rely on to give
meaning to our lives, from our business models to the cycle of human life, including
death itself. Understanding the singularity will alter our perspective on the
significance of our past and the ramifications for our future. To truly understand
it inherently changes one's view of life in general and one's own particular
life."
What Are They Talking About?
The thrust of what Kurzweil is postulating is this:
1) increased capacity in brain scanning technology will soon (he's talking something like 20 years) make it possible to understand the actual mechanisms which the brain uses
2) advances in the capability of computer processing technology will continue per Moore's Law. Note here that according to Kurzweil this can be done with MOSFET technology and does not require molecular computers, which are the next vista
3) there will continue to be significant advances in Artificial Intelligence (AI) theory and software.
Put them together and you have a computer which is capable of human thought and (I'm simplifying things here) can think as well as a human can but at something greater than 1 billion times the speed of our woefully slow brains. In short, the human brain is a wonderful thing but it is slow compared to the speed of a computer.
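The Moore's Law claim in point 2 is simple compounding. A minimal sketch (the starting count, horizon, and 2-year doubling period are illustrative conventions, not Kurzweil's exact figures):

```python
def transistors(start_count: float, years: float, doubling_period_years: float = 2.0) -> float:
    """Project a transistor count under Moore's Law-style doubling."""
    return start_count * 2 ** (years / doubling_period_years)

# 20 years at a 2-year doubling period is 10 doublings: a 1024x increase.
growth_factor = transistors(1.0, 20.0)
```

That thousandfold gain over two decades, repeated decade after decade, is where the "billion times faster than the brain" extrapolation comes from.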
In general, others who accept Kurzweil's thesis in the main are less aggressive about the timeline. Kurzweil retorts by noting (and I may have the numbers slightly inaccurate but the concept is accurate) that when the human genome project started it was supposed to take 15 years. When half that time had passed, the project was 1% finished. The other 99% was completed in the next 5 years. That is the nature of nonlinear growth. In practice the work that gets done in the early stages enables the subsequent work to get done faster. We find new, faster methods as our experience and database increase.
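Kurzweil's arithmetic checks out in a couple of lines. A sketch, assuming (as Kurzweil does) that capability doubles every year; the 1%-at-half-time figures are the ones above:

```python
import math

# Going from 1% complete to 100% complete takes log2(100/1) doublings.
# At one doubling per year that is only about 6.6 more years.
doublings_needed = math.log2(100 / 1)

# So a project that looks 1% done at year 7.5 of a 15-year schedule
# can still finish close to on time under exponential growth.
finish_year = 7.5 + doublings_needed
```

A linear extrapolation from the halfway point would instead predict about 750 years, which is why the early pessimism looked so reasonable and was so wrong.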
Before Kurzweil
In the limited reading which I have done, one of the earliest talks on this came from the eminent physicist Richard Feynman in his December 1959 address to the American Physical Society, "There's Plenty of Room at the Bottom." This was when computers took up massive rooms and no one knew what MOSFET technology was. Feynman saw the possibilities as being limited only by the constants and laws of physics.
Feynman mused about nanotechnology. "What could we do with layered structures
with just the right layers? What would the properties of materials be if we
could really arrange the atoms the way we want them? They would be very interesting
to investigate theoretically. I can't see exactly what would happen, but I can
hardly doubt that when we have some control of the arrangement of things on
a small scale we will get an enormously greater range of possible properties
that substances can have, and of different things that we can do."
The Summit
Getting back to the Singularity Summit itself, I must say that I was stunned
by the reactionary attitude of many of the speakers and the audience. Many of
the speakers expressed a dystopian concern that Kurzweil's thesis would lead
to some highly advanced state of disaster.
In addition to Kurzweil, there were three speakers who impressed me. One was Sebastian Thrun, the dude behind Stanford University's winning entry in the DARPA (Defense Advanced Research Projects Agency) Grand Challenge. This is about autonomous driving: a vehicle with no driver and no remote control. In the 2005 DARPA challenge the vehicles drove over 132 miles of Mojave desert dirt road, given the course only 2 hours before the "green flag."
I loved Sebastian's talk because it was light in spirit ("We loaded the map in 30 seconds and spent the remaining 1 hour 59 minutes and 30 seconds drinking beer") and because he had actually done something. The year before, the race ended in disaster as no entry completed even 10% of the course. Next year DARPA is taking it to the streets by sponsoring an urban version of this - so look both ways before crossing. Sebastian also graciously pointed out that Stanford's team just barely beat the second- through fourth-place finishers: "It's not a victory for a specific institution. It was a victory for the field." He also spiced up his speech with hilarious slides of the misfortunes (alleged or otherwise) of Stanford's sports rival - UC Berkeley.
There is another interesting thread to this DARPA story. Rather than throw money at folks as grants, they said: you pay for your own stuff and we will offer a $2 million cash prize to whoever completes the course faster than everyone else.
This is not merely a whim on the part of the Defense Department. The issue
of autonomous driving takes on a somewhat less than light note when the objective
is moving cargo about Iraq. This is all about not getting soldiers blown up.
The military is supposed to make a third of their vehicles autonomous within
the next 10 years. On the social side, it might be the case that we can save
lives by replacing drunk drivers with autonomous ones. This is not just some techno nuts speaking. Read this from Bob Lutz. He is Vice Chairman of GM and a believer.
In short, major strides have been made in AI. Computers can play chess at the level of a grandmaster, and they can now drive cars and will soon be able to do so in a practical day-to-day manner. One of the speakers commented that while something like chess playing was once regarded as an example of human intelligence, now that a computer can do it some dismiss the accomplishment. In short, as activities once indicative of intelligence are accomplished by computers, the set of intelligent things is reduced by the Luddites.
The second speaker who impressed me was Eric Drexler. Eric is the "father of nanotechnology." Some of Drexler's works can be found starting at http://www.e-drexler.com/.
The third speaker who impressed me was a guy named John Smart. And this guy
is really smart. You can get some information about what he does at http://www.accelerating.org/.
The reason that John made such an impression on me was that he was the one person
speaking who seemed to have a comprehensive idea in his head as to what this
(the good and the bad) was all about.
Getting Back on Topic - Sort of
Fine. What does all of this have to do with macroeconomics? Real simple. This is the next technical frontier. Apart from the brain scan/AI vision of Kurzweil, nanotechnology offers the potential to make things very inexpensively. Nanotech may be able to produce building materials, food, fuel, and you-name-it without resorting to the energy-intensive Industrial Revolution paradigm of mining raw materials, refining them, and moving them someplace else to be manufactured. Think of this as China in a Box. (I mean the country, not the stuff you put in the dishwasher.) Nanotechnology is the ultimate form of recycling: Give me your tired, your poor, your worn-out atoms yearning to breathe free. And I'll make you a hamburger and a 2x4.
In short, the economic vision here is gigantically greater productivity (GDP per hour worked). To me this represents nothing short of a postindustrial economy. Folks would have what they need - food, shelter, clothing, medicine - with cost being no object because it is so low.
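To make the definition concrete, a back-of-the-envelope sketch (the numbers are hypothetical, chosen only to illustrate the mechanism, not taken from any data):

```python
def productivity(real_gdp: float, hours_worked: float) -> float:
    """Productivity as defined above: real output per hour worked."""
    return real_gdp / hours_worked

# Hypothetical: the same output produced with 1/100th of the labor input
# is a 100x productivity gain - and unit labor cost falls by the same factor.
baseline = productivity(1000.0, 500.0)
postindustrial = productivity(1000.0, 5.0)
gain = postindustrial / baseline
```

The point of the "cost is no object" claim is exactly this inverse relationship: as output per hour rises, the labor embodied in a hamburger or a 2x4 shrinks toward nothing.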
To review what I last wrote on this, see RateWatch #481: A Very Different Take on Productivity.
I must add that after attending this summit I was stunned at the reactionary fears of so many people there. Clearly, there remains even in academia a measure of Luddite technophobia. Maybe that's a bit unfair. It may be nothing more than the political correctness regarding global warming and the fears in the 1970s that zapped nuclear power and, well, thereby helped create global warming. Let me take one more step back. I am not stating or implying that all of this is without risk. But as long as the risk of creating a disaster is minuscule, the reward makes the whole thing worth trying.
A perfect summation to this piece is the quotation printed on the back of the
booklet for the Summit. "It is hard to make predictions, especially about
the future." -- Yogi Berra
Some web sites about this topic:
Singularity Institute for Artificial Intelligence
Ray Kurzweil
Acceleration Studies Foundation
If you have something to add to this discussion please post a comment on the
blog.
Dick Lepre