As Edwards demonstrates through multiple visual examples for Ars Technica, publicly available AI image-generation models can now "learn" a person's likeness from as few as five images and then fabricate an entirely illusory narrative about that targeted individual. Such images can be culled from a social media account or taken as individual "frames" from videos posted anywhere online. As long as the image source is accessible, whether through those dubious "privacy" settings or by any other means, the AI model can go to work on it in any fashion its user pleases. For example, as Edwards explains, images can be recreated to depict realistic criminal "mugshots" or illegal and lewd activity, and then easily and anonymously posted to an employer, a news outlet, or a grade-school chat room on TikTok. Edwards' team used the open-source Stable Diffusion model together with the Dreambooth fine-tuning technique to "recreate" photographs of an artificially generated test subject (named "John"), including images depicting "John" semi-nude in front of a children's playground set, "John" dressed as a clown and cavorting in a local bar, and "John" standing naked in his empty classroom, just before his students file in.
As Edwards reports:
Thanks to AI, we can make John appear to commit illegal or immoral acts, such as breaking into a house, using illegal drugs, or taking a nude shower with a student. With add-on AI models optimized for pornography, John can be a porn star, and that capability can even veer into CSAM territory.
We can also generate images of John doing seemingly innocuous things that might still personally be devastating to him—drinking at a bar when he's pledged sobriety or spending time somewhere he is not supposed to be.
Significantly, Edwards' team resorted to an artificial construct only because their real-life volunteer, citing privacy concerns, ultimately declined to allow his altered online images to be published.
AI modeling technology is evolving to the point where such images are virtually impossible to distinguish from real ones. Among the mitigations Edwards describes is legally mandating an invisible digital "watermark" or other hidden label in artificially generated images, but, as he explains, even if such "fakes" are ultimately detectable, the potential for irrevocable damage to someone's personal or professional reputation remains. In other words, once a school-age child is maligned in this fashion, it makes precious little difference to them if the so-called "photos" are later proven to be fake.
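To make the watermarking idea concrete, here is a toy least-significant-bit scheme in Python. This is a minimal sketch for illustration only: the function names are invented for this example, and real proposals of the kind Edwards describes rely on far more robust, tamper-resistant marks that survive cropping and re-compression, which this toy scheme does not.

```python
def embed_watermark(pixels: bytes, mark: bytes) -> bytes:
    """Hide `mark` in the least-significant bits of the first len(mark)*8 pixel bytes."""
    # Unpack the watermark into individual bits, most significant bit first.
    bits = [(byte >> (7 - i)) & 1 for byte in mark for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small to hold watermark")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        # Overwrite only the lowest bit, so each byte changes by at most 1.
        out[i] = (out[i] & 0xFE) | bit
    return bytes(out)


def extract_watermark(pixels: bytes, length: int) -> bytes:
    """Recover a `length`-byte watermark from the least-significant bits."""
    data = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | (pixels[b * 8 + i] & 1)
        data.append(byte)
    return bytes(data)
```

Because each pixel byte changes by at most one unit, the mark is invisible to the eye, which is exactly why such a label can identify an AI-generated image without altering how it looks, and also why a determined abuser could strip it.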
Large technology companies responsible for the creation of AI modeling software have been criticized for failing to acknowledge the potential human costs that come with the mainstreaming of this technology (particularly its reliance on datasets that incorporate racist and sexist stereotypes and representations). And, as Edwards notes, commercially available AI deep learning models have already generated consternation among professional graphic artists whose own copyrighted work has been scraped by AI to create images for commercial use.
As to the potential human consequences of the malicious misuse of AI, Edwards believes women are especially vulnerable.
Once a woman's face or body is trained into the image set, her identity can be trivially inserted into pornographic imagery. This is due to the large quantity of sexualized images found in commonly used AI training data sets (in other words, the AI knows how to generate those very well). Our cultural biases toward the sexualized depiction of women online have taught these AI image generators to frequently sexualize their output by default.
Faced with such a paradigm-shifting, potentially devastating intrusion on their privacy, people will doubtless rationalize that it is unlikely to happen to them. That may well be true for most, but as this technology becomes more available and easier for non-technical users to employ, it is hard not to imagine the social disruption it entails. For those who believe they might be at special risk, one solution Edwards suggests "may be a good idea" is deleting all of your photos online. Of course, as he acknowledges, that is not only personally out of the question for most people (given their attachment to social media) but, for many, practically impossible. Politicians and celebrities, for example, whose photographs have been posted all over the internet for decades, and whose visibility makes them natural targets for such "deepfakes," are likely to be the first forced to deal with the issue as this technology becomes more and more widespread.
Of course, there’s always the possibility that we ultimately become so inured to these intrusions that they lose their effectiveness. As Edwards suggests:
Another potential antidote is time. As awareness grows, our culture may eventually absorb and mitigate these issues. We may accept this kind of manipulation as a new form of media reality that everyone must be aware of. The provenance of each photo we see will become that much more important; much like today, we will need to completely trust who is sharing the photos to believe any of them...[.]
Unfortunately, "trust" is a commodity in very short supply, particularly in the politically and socially polarized environment we live in, where people tend to believe whatever fits their predispositions. It seems fitting that the very existence of social media, and the carefully filtered "bubble" mentality it fosters, is likely to be the greatest enabler of this type of unwanted invasion, only the most recent example of the privacy we all sacrificed the moment we first "logged on."