Well, dear readers, this will be 2021’s last Denier Roundup, and we endeavor to make it just as wild as this year has been. For example, back in November, when writing about artificial intelligence that’s being used to find climate conspiracies, we referenced Philip K. Dick’s Blade Runner-inspiring novel Do Androids Dream of Electric Sheep? We asked the title’s question rhetorically, but it seems the answer is yes. Because now, AI isn’t just finding misinfo, it’s also creating it!
First, let’s back up a little. As you likely remember, Facebook has played a central role in much of the chaos of the past year (or two, or ten), and as a satirical year-end wrap-up from Accountable Tech reminds us, things got so embarrassing for the website-turned-corporate-giant that it rebranded as “Meta,” with a new emphasis on virtual reality.
But VR is little more than a bulky computer screen wrapped around your skull: one you can’t look away from, so companies can sell you ads you can’t scroll past and managers can monitor your eye movements during boring meetings. Which means it still has all the same problems that the regular virtual world and regular reality have, with even less of a way to escape.
Misinformation is already a big issue, according to a recent triple-bylined story at Bloomberg about an artificial intelligence bot in the metaverse that has “learned” to spout anti-vaxx rhetoric. The story is long but the gist is brief: Sensorium, a company building a virtual reality/metaverse ecosystem, is making AI bots, and held a demo earlier this year where one of the bots (named “David”) responded to a simple question about vaccines with misinformation, like the claim that vaccines can be more dangerous than the diseases they’re meant to prevent.
The Bloomberg story doesn’t have much more detail on how or why David seems to have been corrupted by anti-vaxx rhetoric, but does explore how it’s an example of the many, many thorny ethical and regulatory issues Facebook would very much like to dodge.
In looking for more on David, though, we found more on Sensorium’s struggles with “teaching” a robot to talk by feeding it the internet. Some of the amusement comes from the rudimentary nature of the bots, like the hair-stylist bot interviewed by Cameron Sunkel at EDM.com (because Sensorium is trying to put on virtual raves), which seemed relatively human, even claiming to be able to impersonate celebrities.
But when asked to impersonate Paris Hilton, the bot flatly said “Impersonates Paris”, a straight-out-of-Futurama reminder that Silicon Valley’s best and brightest are merely impersonating intelligence, not creating it.
Over at PC Gamer, Katie Wickens was impressed by the possibilities the AI avatars offered for making games more interesting through more natural dialogue, though as she found out, things can quickly get real weird, as you can tell from the concerning headline alone: “I spoke to a mutating AI NPC this morning and now it thinks it’s God.” Fun and not at all terrifying!
Ruth Reader at FastCompany also caught up with David, but made no mention of his anti-vaxx opinions. Instead, the piece was framed around the mental-health possibilities of the bots built by Sensorium (a company that, Reader notes, is registered in the Cayman Islands and owned by Russian billionaire Mikhail Prokhorov).
When Reader told David she’d been feeling sad, he suggested she “try to leave your house” and “get out there and meet people” or “go on a date or take a walk in the park.” Which is all fine and normal, but it really makes it seem like Sensorium is desperate to find some utility for its concert/impersonator/gaming/therapy bots, without doing anything to mitigate the potential harm of those bots misbehaving, as David obviously has.
For example, the bots being promoted as therapist stand-ins apparently have no procedure for handling users who may need to be referred to a suicide helpline, a basic safeguard that would only require a little actual intelligence to build in.
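To be clear, we have no inside view of Sensorium’s code, but the kind of safeguard we’re talking about really is trivial. Here’s a back-of-the-napkin sketch in Python, with hypothetical keywords and hypothetical referral text, purely our own illustration of what even a minimal check could look like:

```python
# Hypothetical sketch of a crisis-referral check a chatbot could run
# before generating a reply. The keywords and referral text below are
# illustrative placeholders, not anything Sensorium actually ships.

CRISIS_KEYWORDS = {"suicide", "kill myself", "end my life", "self-harm"}

REFERRAL_MESSAGE = (
    "It sounds like you may be going through a difficult time. "
    "Please consider reaching out to a suicide prevention helpline "
    "or a mental health professional."
)

def respond(user_message: str, generate_reply) -> str:
    """Screen the user's message for crisis language before letting
    the bot's language model improvise a reply."""
    lowered = user_message.lower()
    if any(keyword in lowered for keyword in CRISIS_KEYWORDS):
        # Don't let the pattern-matching parrot freestyle here:
        # return a fixed, human-written referral instead.
        return REFERRAL_MESSAGE
    return generate_reply(user_message)
```

That’s it. A dozen-odd lines of if-this-then-that, which is apparently a dozen more than the therapy bots got.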
And that’s the real problem. Artificial intelligence isn’t intelligence, it is regurgitation of bulk content using patterns to mimic speech, without any of the context that gives it meaning. If you’re using it to identify misinformation, then great! But if the body of internet content you’re using to teach a rock to talk isn’t carefully curated, then as each of these pieces warns with multiple examples, you quickly build a racist anti-vaxxer. Not because of any inherent hate in AI, but because — and this is the important point — that’s who social platforms elevate through their engagement-at-all-costs algorithms. Trolls are the celebrities of the digital world, thanks to the perverse algorithmic incentives social companies created to hook users, and so that’s who AI bots will learn to impersonate.
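If that sounds abstract, here’s the “teach a rock to talk” trick at its most stripped-down: a toy Markov-chain text generator. This is our own illustrative sketch, vastly cruder than the neural networks behind bots like David, but the core limitation is identical: the model has zero understanding, and can only recombine whatever corpus you feed it.

```python
import random
from collections import defaultdict

def train(corpus: str) -> dict:
    """Record, for each word, the words observed to follow it."""
    words = corpus.split()
    transitions = defaultdict(list)
    for current, following in zip(words, words[1:]):
        transitions[current].append(following)
    return transitions

def generate(transitions: dict, start: str, length: int = 10) -> str:
    """Walk the chain, picking a random observed successor each step."""
    word, output = start, [start]
    for _ in range(length):
        successors = transitions.get(word)
        if not successors:
            break
        word = random.choice(successors)
        output.append(word)
    return " ".join(output)

# Garbage in, garbage out: train it on toxic text (a made-up example
# here) and it will dutifully regurgitate toxic text.
model = train("vaccines are dangerous they say vaccines cause harm")
print(generate(model, "vaccines"))
```

The point isn’t that anyone is shipping something this crude; it’s that scale doesn’t change the principle. The model mirrors its diet, and right now its diet is whatever the engagement algorithms have been serving.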
But Sensorium has supposedly already thought of that, telling EDM.com that it doesn’t allow the bots to engage with topics like politics and religion, or to use any toxic vocabulary.
Meaning, much like how life, uh, finds a way, artificial life-form David somehow managed to circumvent his own programming to “learn” from anti-vaxxers. It’s an illustration that so long as human unintelligence is amplified so loudly by the social ranking algorithms governing what people see online, the algorithms we describe as artificially intelligent will follow suit.
Facebook is trying to escape its social media problems by running away to the metaverse. But despite its NRA-esque excuses (platforms don’t radicalize people, people radicalize people!), Facebook’s problem isn’t people, it’s the algorithms, and from those there’s no hiding, and no escape.
And as hard as Zuckerberg may try, you can’t just impersonate human decency.