Climate change doesn’t just show that unregulated economies are a bad idea; it also shows that letting technology grow organically isn’t a good idea either. Creating a technology like the atom bomb or nerve gas and then trying to control it later may have worked so far, but just barely.
The electric car didn’t happen by standing by and watching technology grow. Its rise involves curtailing the use of gas cars, and here the government should be even more aggressive.
Despite that success, the Democratic Party platform remains fairly scattershot on research:
> We will also build on the foundation of the Obama-Biden Administration’s Cancer Moonshot to break down silos and accelerate research into cancer and cancer treatments by creating an agency with the sole mission of finding new cures and treatments for cancer and other diseases.
If we are purely after a bump in average life span, then heart disease is still the number one killer. Not only is prioritizing cancer research poorly justified, but we lack even the intellectual tools for making these kinds of decisions.
Instead of assuming everything is possible and desirable, we have to pick our spots and potentially even discourage or ban some kinds of research or applications.
The basic social contract we operate by was sketched by Francis Bacon in the Novum Organum in 1620:
> The men of experiment are like the ant, they only collect and use; the reasoners resemble spiders, who make cobwebs out of their own substance. But the bee takes a middle course: it gathers its material from the flowers of the garden and of the field, but transforms and digests it by a power of its own.
Our society gives immense wealth and power to those who transform, because we associate their efforts with progress, and sometimes they are. However, we can’t continue relying on such a naive approach to improvement.
Case study — neural interface
www.nature.com/… It’s heartbreaking that there are already issues:
The company that implanted the device in her brain to help free her from seizures went bankrupt. The device had to be removed.
> “She refused and resisted as long as she could,” says Gilbert, but ultimately it had to go. It’s a fate that has befallen participants of similar trials, including people whose depression had been relieved by DBS. Patient 6 cried as she told Gilbert about losing the device. She grieved its loss. “I lost myself,” she said.
>
> “It was more than a device,” Gilbert says. “The company owned the existence of this new person.”
A technology that allows a bi-directional interface between human brains and computers might not be around the corner, but now is the time to decide whether it should be encouraged or prevented.
There are almost endless potential ramifications to such a technology, potentially culminating in the end of humanity as we know it:
- Could there be an arms race of sorts, where only the latest interface and software makes us fully able to participate in society? For instance, two users on the same model might be able to communicate telepathically.
- As patient 6 above shows, even very rudimentary versions of this technology can be addictive beyond anything developed before. If we can now prevent seizures, then how long before we can block psychological pain or induce full ataraxia?
This is research that can be funded or not funded. We can’t keep pretending that we are just passively watching development, or that it makes sense to throw money at the creators of the first applications that arrive after years of government-funded research.
Amnesty International is already taking a position on banning facial recognition software. But the dangers of racist misuse of neural interfaces are orders of magnitude scarier. Some prisoners are already offered GPS monitoring as a substitute for jail; accepting a chip in your brain as an alternative to prison isn’t much of a stretch.
Can technologies be stifled?
It’s an asymmetric battle to get a technology banned, but banning research can be effective. Look at the chilling effect a temporary ban on creating new stem cell lines had.
It’s in our power to dramatically slow or stop a technology like neural interfaces from being developed, but starting early is the key. The closer a technology comes to fruition, the more money and lobbyists will be backing it.
Amnesty International and the Center for Humane Technology are on the right track in their fight against abuse of new tech, but a more effective solution is to target technologies long before they are developed.
The first step is to stop falling for the bait and switch. We have to be aware of the worst application of a technology instead of only holding up its best applications. As it turns out, there are many ways to provide power other than nuclear, but not so many ways to blow up the planet.
What’s the worst application of a neural interface? Potentially slavery and dictatorship. You might not even need an invasive neural interface to achieve that: train a machine learning algorithm to detect disloyalty, then scan brains while asking questions about the dictator.
Even as a thought exercise, it’s difficult to come up with a form of this technology that doesn’t have abusive potential that might outweigh its benefits. If you can cure paralysis, then you can cause it.
Sadly, we are almost entirely missing a philosophical framework for evaluating which types of research to fund and which to disallow. We know for certain that “market forces” are not a reasonable way to make these decisions, but what is?