Millions—perhaps billions—of social media users routinely post images of themselves online without thinking much about it, most simply trusting that because so many others have done exactly the same thing, it must be “okay.” Since digital technology became ubiquitous in the late 1990s, much of the human race has embraced the curious notion that feeding personal data (such as photographs) into the enormous maw of the internet—a global communication tool and database accessible to nearly everyone—carries no personal consequences. Posting such images has, over the past two decades, become so second nature that most of us now carry a device in our purses and pockets designed to do just that.
However, in light of rapid advances in Artificial Intelligence (AI) technology that allow such online images to be manipulated and altered, this may be the time to rethink just what we’re doing when we post that selfie or other image of ourselves on Instagram, Facebook, or TikTok. As explained by Benj Edwards, writing for Ars Technica, “New AI image-generation technology allows anyone to save a handful of photos (or video frames) of you, then train AI to create realistic fake photos that show you doing embarrassing or illegal things.”
We’re not talking Photoshop-style pranks, either. As Edwards’ article explains, AI technology currently available to the general public can recreate or alter photographic images to the point where they are virtually indistinguishable from the real thing. Consequently, anyone whose personal or professional life could be damaged by such malicious “deepfakes” should consider themselves potentially at risk for this kind of tampering. To be clear, that includes (but is not limited to) anyone who has ever done something to irritate, offend, or provoke envy or jealousy in another person, such as a former spouse, lover, friend, partner, colleague, or business competitor, or indeed anyone whose interests can be advanced—or gratified—by the creation of such images.
As Edwards demonstrates through multiple visual examples for Ars Technica, publicly available AI image-generation models can “learn” a person’s likeness from as few as five images and then construct an entirely illusory narrative about that targeted individual. Such images can be culled from a social media account or taken as individual “frames” from videos posted anywhere online. As long as the image source is accessible, whether because of those dubious “privacy” settings or by any other means, the AI model can go to work on it in any fashion its user pleases. For example, as Edwards explains, images can be recreated depicting realistic criminal “mugshots” or illegal and lewd activity, then sent easily and anonymously to an employer or a news outlet, or posted in a grade school chat room on TikTok. Edwards’ team used the open-source AI image model Stable Diffusion, fine-tuned with the Dreambooth technique, to “recreate” photographs of an artificially generated test subject (named “John”), including images depicting “John” semi-nude in front of a children’s playground set, “John” dressed as a clown and cavorting in a local bar, and “John” standing naked in his empty classroom, just before his students file in.
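For readers curious about just how low the technical barrier has become, the sketch below shows roughly what generating one of these fabricated scenes looks like once a Stable Diffusion checkpoint has been fine-tuned on a handful of someone’s photos. It uses the open-source Hugging Face diffusers library; the model directory, the conventional “sks” placeholder token, and the prompt are illustrative assumptions rather than details from Edwards’ experiment.

```python
# Minimal sketch (not Edwards' actual code): generating an image of a subject
# whose likeness has already been fine-tuned into a Stable Diffusion checkpoint
# via Dreambooth. Assumes the Hugging Face "diffusers" library and a local
# fine-tuned model directory named "dreambooth-output" (hypothetical path).
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned checkpoint; "sks" is the rare-token placeholder
# conventionally used in Dreambooth to refer to the learned subject.
pipe = StableDiffusionPipeline.from_pretrained(
    "./dreambooth-output",          # hypothetical fine-tuned model directory
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Any scene can be described in the prompt; the model renders the learned
# subject into it.
prompt = "a photo of sks person sitting in a bar, photorealistic"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("fabricated_scene.png")
```

The fine-tuning step itself is comparably accessible: diffusers ships an example Dreambooth training script that takes little more than a folder of subject photos and an instance prompt.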
As Edwards reports:
Thanks to AI, we can make John appear to commit illegal or immoral acts, such as breaking into a house, using illegal drugs, or taking a nude shower with a student. With add-on AI models optimized for pornography, John can be a porn star, and that capability can even veer into CSAM territory.
We can also generate images of John doing seemingly innocuous things that might still personally be devastating to him—drinking at a bar when he's pledged sobriety or spending time somewhere he is not supposed to be.
Significantly, Edwards’ team resorted to an artificially generated subject only because a real-life “volunteer” was ultimately reluctant, out of privacy concerns, to allow altered versions of their own online images to be published.
AI modeling technology is evolving to the point where it is virtually impossible to distinguish such images from real ones. Among the mitigations Edwards describes is legally mandating an invisible digital “watermark” or other hidden label in artificially generated images of this kind. But as Edwards explains, even if such “fakes” are ultimately detectable, the potential for irrevocable damage to someone’s personal or professional reputation still exists. In other words, once a school-age child is maligned in this fashion, it makes precious little difference to them if the so-called “photos” are later proven to be fake.
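To make the watermark idea concrete, the sketch below uses the open-source invisible-watermark package (a library some Stable Diffusion reference scripts have used to tag their output) to embed a short hidden marker in an image and read it back later. The file names and the “FAKE” payload are hypothetical, and, as Edwards’ caveat suggests, such a check only helps after an image has already circulated, and only if re-encoding or cropping hasn’t destroyed the mark.

```python
# Minimal sketch of an invisible watermark, using the open-source
# "invisible-watermark" package (imwatermark). File names and the payload
# string are hypothetical.
import cv2
from imwatermark import WatermarkEncoder, WatermarkDecoder

# Embed a 4-byte marker into the image's frequency domain (DWT + DCT).
image = cv2.imread("generated.png")
encoder = WatermarkEncoder()
encoder.set_watermark("bytes", b"FAKE")
marked = encoder.encode(image, "dwtDct")
cv2.imwrite("generated_marked.png", marked)

# Later, anyone with the decoder can check whether the marker is present.
decoder = WatermarkDecoder("bytes", 32)  # 32 bits = 4 bytes
recovered = decoder.decode(cv2.imread("generated_marked.png"), "dwtDct")
print(recovered)  # b"FAKE" if the watermark survived
```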
Large technology companies responsible for the creation of AI modeling software have been criticized for failing to acknowledge the potential human costs that come with the mainstreaming of this technology (particularly its reliance on datasets that incorporate racist and sexist stereotypes and representations). And, as Edwards notes, commercially available AI deep learning models have already generated consternation among professional graphic artists whose own copyrighted work has been scraped by AI to create images for commercial use.
As to the potential human consequences of the malicious misuse of AI, Edwards believes women are especially vulnerable.
Once a woman's face or body is trained into the image set, her identity can be trivially inserted into pornographic imagery. This is due to the large quantity of sexualized images found in commonly used AI training data sets (in other words, the AI knows how to generate those very well). Our cultural biases toward the sexualized depiction of women online have taught these AI image generators to frequently sexualize their output by default.
Faced with such a paradigm-shifting, potentially devastating intrusion on their privacy, people will doubtless rationalize that it is unlikely to happen to them. That may very well be true for most, but as such technology becomes more available and easier for non-technical types to employ, it’s hard not to imagine the social disruption it could entail. For those who believe they might be at special risk, one solution that Edwards suggests “may be a good idea” is to delete all of their photos online. Of course—as he acknowledges—that is not only personally out of the question for most people (given their addiction to social media), but for many it’s also impossible as a practical matter. Politicians and celebrities, for example, whose photographs have been posted all over the internet for decades—and whose visibility makes them natural targets for such “deepfakes”—are likely to be the first ones forced to deal with the issue as this technology becomes more widespread.
Of course, there’s always the possibility that we ultimately become so inured to these intrusions that they lose their effectiveness. As Edwards suggests:
Another potential antidote is time. As awareness grows, our culture may eventually absorb and mitigate these issues. We may accept this kind of manipulation as a new form of media reality that everyone must be aware of. The provenance of each photo we see will become that much more important; much like today, we will need to completely trust who is sharing the photos to believe any of them...[.]
Unfortunately, “trust” is a commodity in very short supply, particularly in the politically and socially polarized environment we currently live in, where people tend to believe whatever fits their predispositions. It seems fitting that the very existence of social media, and the carefully filtered “bubble” mentality it fosters, is likely to be the greatest enabler of this type of unwanted invasion, itself only the latest reminder of the privacy we all sacrificed from the moment we first “logged on.”