Microsoft, Meta, Google, Amazon, X, OpenAI and TikTok announced a voluntary accord meant, allegedly, to help fight disinformation on their platforms. I have read the accord, and I am afraid it appears to be mostly posturing.
First, the accord emphasizes that:
We acknowledge the importance of pursuing these issues with transparency about our work, without partisan interests or favoritism towards individual candidates, parties, or ideologies, and through inclusive opportunities to listen to views across civil society, academia, the private sector, and all political parties.
This very much reads like an excuse not to act against the bad behavior of one party unless the others are indulging in the same level of deepfake or disinformation activity. Given the far-right turn Elon Musk has taken with X, there is a very real concern that this accord will not be about honestly dealing with the issue, but rather about the worst kind of both-sides-ism.
Second, the accord does not promise to remove disinformation or deepfakes. The section that deals with consequences, such as they are, does not promise to remove the disinformation, merely to label it:
Seeking to appropriately address Deceptive AI Election Content we detect that is hosted on our online distribution platforms and intended for public distribution, in a manner consistent with principles of free expression and safety. This may include—but is not limited to—adopting and publishing policies and working to provide contextual information on realistic AI-generated audio, video, or image content where we can detect it so it is clear it has been AI-generated or manipulated. In considering actions, operators of online platform services will pay attention to context and in particular to safeguarding educational, documentary, artistic, satirical, and political expression
Now, they do say their actions are not limited to providing contextual information, but they do not even mention the possibility of removing the disinformation and deepfakes. This only reinforces the impression that they will let that material remain in an attempt to appear unbiased, or to pose as defenders of free speech.
The accord also does not commit to increasing the number or use of in-house human content moderators. There is one mention of using “content moderation services,” compared with several mentions of technological solutions, solutions that are lagging behind the imitative AI itself. Without involving people, the odds of finding this material are lower than they need to be.
Basically, this accord does not commit the companies to removing disinformation; it leaves open the idea that this is a symmetrical problem, and thus that treating one side more harshly than the other is on its face unfair; and it does not commit to the most effective detection method: increased human-driven moderation. It isn’t going to help solve any disinformation problem. In fact, it might make the problem worse.
If people believe these companies are taking steps to remove or otherwise limit disinformation, they may be more likely to believe material they see on these sites. After all, didn’t these companies make a big deal about all the anti-disinformation work they are doing? If we want to prevent these problems, we need to take actual action.
We need to make AI companies liable for the output of their systems. We need to make sure any moderation is done not with fairness to political factions in mind, but with a focus on the needs of information consumers. We need to ensure that these companies don’t fall back on uncertain and unproven technological solutions but instead use known best practices, including human moderation. Leaving the handling of disinformation to these companies is clearly not going to help anyone.
Except the disinformation peddlers themselves.