A new documentary, Do You Trust This Computer?, had its world premiere Thursday and is available to stream online for free until Sunday night (thanks to Elon Musk).
The documentary explores the rise of Artificial Intelligence (AI), its potential consequences and perils for the future of humanity.
Science fiction has long anticipated the rise of machine intelligence. Today, a new generation of self-learning computers has begun to reshape every aspect of our lives. Incomprehensible amounts of data are being created, interpreted, and fed back to us in a tsunami of apps, personal assistants, smart devices, and targeted advertisements. Virtually every industry on earth is experiencing this transformation, from job automation, to medical diagnostics, even military operations.
Do You Trust This Computer? explores the promises and perils of our new era. Will A.I. usher in an age of unprecedented potential, or prove to be our final invention? doyoutrustthiscomputer.org
The film opens with these foreboding words, spoken by Dr. Frankenstein’s creation as written by Mary Shelley: “You are my creator, but I am your master ...” The original quote ends with “Obey!”
Here is the trailer -
The film is directed by Chris Paine whose previous films include “Who Killed The Electric Car?” and “Revenge of the Electric Car.”
The film features interviews with many of the experts on the subject, including Elon Musk, Google Brain founder Andrew Ng, Affectiva CEO Rana el Kaliouby, Osaka University professor Hiroshi Ishiguro, OpenAI director Shivon Zilis and Westworld co-creator Jonathan Nolan. deadline.com/...
Elon Musk and AI
Elon Musk has been speaking passionately about AI, largely in ominous terms. He and others have warned that, if left uncontrolled and unregulated, AI has the potential to wipe out humanity as we know it; it could bring enormous changes to society and its governance, and make human beings obsolete and subservient to super-intelligent digital beings. Musk has cautioned that AI could end the human race if regulations and precautions aren’t put in place now.
At SXSW this year, Musk spoke in rather dark and foreboding terms about AI -
We have to figure out some way to ensure that the advent of digital super intelligence is one that is symbiotic with humanity. I think that’s the single biggest existential crisis that we face.
Mark my words, AI is much more dangerous than nukes. It scares the hell outta me. It’s capable of vastly more than almost anyone knows.
There needs to be a public regulatory body that has insight and then oversight to confirm that everyone is developing AI safely and in a way that is “symbiotic with humanity”.
Elon Musk is also the founder of the startup Neuralink, which is reported to be developing implantable brain–computer interfaces (BCIs), which could lead to AI augmenting human intelligence rather than supplanting it.
Stephen Hawking also issued ominous warnings about AI, stating that it could develop into “a new form of life that outperforms humans.”
Let’s watch the Film
I watched the film on Saturday. The movie covers a lot of ground, focusing on the consequences and ethics of AI and what they could mean for the future of humanity. I thought it meandered a bit in the beginning but soon found its groove.
In the second half of the movie, the director dives into the use of big data and AI in the 2016 elections. That was one of the key elements of the film; the other was the use of AI in warfare.
Hopefully, enough of us will watch it so that we can have a meaningful and productive discussion on Sunday or Monday.
After Sunday, the film will be available to rent and purchase on the website.
Epilogue
The purpose of these films, videos, writings and organizations is not to scare people with doomsday predictions (many sci-fi writers already do a pretty good job of that), but to help people understand where we are headed and what the possibilities are, and to help shape public policies today that will lead to better outcomes tomorrow.
As in standard risk analysis, even when the probability of a risk is low, if its consequences are dire you take it seriously and work on mitigation. We have international treaties against nuclear, biological and chemical weapons. We have criminal and civil laws against wrongdoing. And we have already seen the destructive use of big-data analysis in winning elections.
Should we have similar laws and regulations to prevent AI from being developed for destructive purposes? That sounds prudent, even if one thinks the probability of killer robots destroying humanity is small. And how should society prepare for the day when many of today’s jobs are taken over by robots and AI software? Perhaps we should invest in threat identification, training, education, universal basic income, ...
Further Reading
- AI and the Future of Jobs — www.dailykos.com/…
- Elon Musk at SXSW - On Mars, Rockets, AI and the next World War — www.dailykos.com/…
- When Will AI Exceed Human Performance? — www.dailykos.com/…
- Scientists announce boycott of South Korean university over development of autonomous murder robots — www.dailykos.com/…
- Goodbye Anthropocene hello Alexacene. The future of humankind and the planet. — www.dailykos.com/…
- futureoflife.org
- autonomousweapons.org