Some say the world will end in stupidity, some say in superintelligence (with apologies to Robert Frost):
Computer scientists may now be in the process of building [artificial intelligence] AI with greater-than-human intelligence (“superintelligence”) [which] could become so powerful that it would either solve all our problems or kill us all, depending on how it’s designed.
The above is from a Scientific American review of a new book by "documentarian" James Barrat,
"Our Final Invention: Is AI the Defining Issue for Humanity?", which the reviewer sees as
"a counterpoint to The Singularity Is Near and other works by Ray Kurzweil."
Contrary to Kurzweil, the reviewer and the book argue that:
[to] design safe AI ... looks to be a massive philosophical and technical challenge... Unfortunately ... dangerous AI is easier and thus likely to come first.
Further:
economic and military pressures pushing AI forwards ... would need to be harnessed to avoid dangerous AI. ... A ban on high-frequency trading might not be a bad place to start.