That’s a rhetorical question. Of course we haven’t. And we won’t until we and enough of the public take control of our commons, physical and digital, from system-wide networks down to individual contributions. We’ve trashed much of our physical commons, and it’s more of the same in cyberspace.
We need commons. I cannot exist solely in my own private domain. Every time I step beyond my property, or even receive communication or supplies from outside it, I rely on infrastructure and nature like everybody else.
In the physical world, entropy happens, but we have some choices. Though we can’t achieve truly zero waste or fully circular economies, we can plan lifecycles for products and packaging, as well as incentivize participation in efficiency, recycling, and other industrially planned programs.
Cyberspace is filled with our data. Which datasets are owned by private entities? Who controls how they are used? How much can I control what I contribute, such as this diary? How much can advertisers add to our data-exchange bandwidth? Must every technology, avenue, and platform be a field of unfettered marketing access?
Artificial intelligence (AI), with roots going back decades in neural networks and other predictive algorithms, relies on input datasets to make predictions. Again, who owns and controls the data? And now, given recent generative applications of AI, we can ask whether applications should be limited.
China’s national regulations on AI, possibly with blacklisted datasets, have recently been joined by Europe’s sweeping efforts. In the US, states and the executive branch have been issuing guidelines in the wake of developments with generative AI, while private companies institute their own policies. Nations are also competing against each other in the development and ownership of AI and datasets, in the infrastructure used to run the systems, as well as in accountability for what is being run.
At present, the EU and China do seem to agree on taking a more active approach to regulating AI and digital ecosystems relative to the U.S. This could change, however, if the U.S. were to pass the Algorithmic Accountability Act. Like the EU AI Act, the Algorithmic Accountability Act requires organizations to perform impact assessments of their AI systems before and after deployment, including providing more detailed descriptions on data, algorithmic behavior, and forms of oversight.
Should the U.S. adopt the Algorithmic Accountability Act, the regulatory approaches of the EU and the U.S. would be better aligned. Yet even if regulatory regimes converge over time, the trajectory of digital fragmentation between the EU and US on one side, and China on the other, is set to continue under the current political climate.
Several related points are worth mentioning here.
First, there are no independent, self-replicating robots yet. Until then, we control cyberspace and AI largely through how much energy we put into them. If we live within climate-friendly power consumption means, then AI and other cyber issues will be more manageable.
Next, the benefits of AI and digital citizenship will never come for free. Democratic public ownership and accountability require us to participate and speak up as we can. It won’t be easy, either, especially given the large inequities wrought by trickle-down economics and neoliberalism. Top-heavy ownership of datasets threatens to exacerbate inequities and stifle innovation in AI development. We will have to defend our rights, possibly as self-sovereigns.
Self-sovereign identity puts people in charge of their own digital identities. It means that individuals have choice and sovereignty over their digital selves to the same degree that we have control over our physical selves. This aligns with the fact that we all have inherent dignity, a dignity that does not come from being born in a certain place or with certain attributes other than being human.
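To make the idea a little more concrete, one mechanism often associated with self-sovereign identity is selective disclosure: you publish only commitments to your attributes, then reveal just the one a verifier needs, and nothing else. Here is a minimal sketch, assuming salted hash commitments as a stand-in for the real signature-based verifiable credentials used in practice; the names and attributes are hypothetical.

```python
import hashlib
import secrets

def commit(attribute: str) -> tuple[str, str]:
    """Return (salt, commitment) for a single attribute value."""
    salt = secrets.token_hex(16)
    digest = hashlib.sha256((salt + attribute).encode()).hexdigest()
    return salt, digest

def verify(attribute: str, salt: str, commitment: str) -> bool:
    """Check a disclosed attribute against its published commitment."""
    return hashlib.sha256((salt + attribute).encode()).hexdigest() == commitment

# The holder commits to several attributes but keeps the salts private.
attributes = {"name": "Alice", "birth_year": "1990", "city": "Oslo"}
wallet = {k: commit(v) for k, v in attributes.items()}
public_commitments = {k: c for k, (s, c) in wallet.items()}  # shared publicly

# Later, the holder chooses to disclose only their city.
salt, _ = wallet["city"]
assert verify("Oslo", salt, public_commitments["city"])       # verifier accepts
assert not verify("Paris", salt, public_commitments["city"])  # wrong value fails
```

The point of the sketch is who holds the salts: the individual, not a platform. No third party can reveal the undisclosed attributes, which is the control-over-our-digital-selves that the paragraph above describes.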
The alternative to public participation and democratic ownership seems to be feudal dictators running it all while we hope they will be benevolent to us. Facebook, for one, is asserting its sovereignty, with its own constitutional conventions and supreme court to go along with its representation in our courts. Long-standing concerns about tech corporations behaving as autocratic nation-state actors remain.
The reality is we don’t need dictatorial owners. They are a blight on efficient production, distribution, sustainability, and innovation. Nobody deserves to make or lose $28,000,000,000 in stock transactions using our network, communication, and social commons. They didn’t contribute that much. They don’t know that much. And track records show that big corporate owners are poor arbiters of the value of our communities, despite their best efforts.
Finally, here are a couple of videos showing that we are only scratching the surface of this discussion.
First is Constitutions 2.0: Self, Sovereignty, and Scale in the Age of AI, in which the issues above and others are discussed by Holberg Laureate Sheila Jasanoff in conversation with Prof. Achille Mbembe and Prof. Zeblon Vilakazi.
But it goes so much deeper. In the end, we may be dealing with what it means to be conscious, who or what has consciousness, and where that consciousness resides. In a comment last week, I referenced a recent discussion on the possibility of consciousness extending beyond brains. Many issues will be raised if we need to consider cyber or algorithmic consciousness. Network and code owners should be expected to be slow in admitting any revelations and to oppose any loss of their control.
I really want to explore this discussion on consciousness further, but have already taken too much space and attention. Please enjoy the linked videos as you wish. I am interested in presenting on this further and may do so in future ACM installments.