I must admit I have a sneaking envy of right-wingers. They seem to have so much time on their hands!
As reported by Mack Degeurin, for Popular Science:
Facing bias accusations, Google this week was forced to pause the image generation portion of Gemini, its generative AI model. The temporary suspension follows backlash from users who criticized it for allegedly placing too much emphasis on ethnic diversity, sometimes at the expense of accuracy. Prior to Google pausing services, Gemini was found producing racially diverse depictions of World War II-era Nazis, Viking warriors, the US Founding Fathers, and other historically white figures.
[***]
The posts quickly attracted the attention of right-wing social media circles which have taken issue with what they perceive as heavy-handed diversity and equity initiatives in American politics and business. In more extreme circles, some accounts used the AI-generated images to stir up an unfounded conspiracy theory accusing Google of purposely trying to eliminate white people from Gemini image results.
(emphasis supplied)
Here are some of the AI-generated images (as noted in Degeurin's article) that apparently have right-wingers so discombobulated.
This is apparently such a concern that Ross Douthat of the New York Times has a column about it today, titled “Should We Fear the Woke AI?”
Let’s be clear, Google’s AI model is most certainly creating some inaccurate representations here. Nazis were most assuredly White People. The whitest, in fact! And yes, there are myriad examples of AI image generators automatically yielding racially and gender-biased results that exclude people of color and other groups routinely discriminated against. Those groups can point to systemic, actual harm from such discrimination. Guess what? White people ain’t among those groups.
As Degeurin notes:
Though the controversy surrounding Gemini seems to stem from critics arguing Google doesn’t place enough emphasis on white individuals, experts studying AI have long said AI models do just the opposite and regularly underrepresent nonwhite groups. In a relatively short period of time, AI systems trained on culturally biased datasets have amassed a history of repeating and reinforcing stereotypes about racial minorities. Safety researchers say this is why tech companies building AI models need to responsibly filter and tune their products. Image generators, and AI models more broadly, often repeat or reinforce culturally biased data absorbed from their training data in a dynamic researchers sometimes refer to as “garbage in, garbage out.”
Sorry, but not every AI technical glitch (or programmer bias, if that’s actually what it is) is a “conspiracy.” I’m thinking the complainers here are the same types of people who blow a gasket when they see a multiracial couple in a TV commercial.
But, honestly, who spends their time asking an AI model to generate pictures of Nazis anyway? Is this really what right-wingers sit around doing? Waking up, eating breakfast, then plopping down with their laptops on the couch and saying, “make me a Nazi, Gemini!”
I guess so. What a life.
As Degeurin notes:
Users on X, formerly Twitter, began sharing screenshots of examples where Gemini reportedly generated images of nonwhite people when specifically prompted to depict a white person. In other cases, Gemini reportedly appeared to over-represent non-white people when prompted to generate images of historical groups that were predominantly white, critics claim.
Oh please. Let’s share our screenshots on Twitter and create a controversy! But then these allegedly offended folks actually take it a step further, firing off a complaint to Google about its model’s “inaccuracy.” Because DEI, diversity bad, blah blah blah.
From Degeurin's article, apparently they didn’t like these AI images either.
Not real clear why anyone would ask Google’s AI to generate an image of the King of England to begin with. If they were actually interested in finding out who the King of England was, couldn’t they simply go to Google images and type in the words, “King of England?” Considering the homogeneity of that search result, it’s tough to take seriously the claim that Google is “biased” against whites.
Of course, if you had a lot of spare time to do nothing but goof around in front of a screen finding things to whine about, it might seem like a “conspiracy.” (And to be clear, there actually were Black Kings of England.) Just don’t tell me some right-winger really planned to create a game including a King Charles skin/avatar and now he’s beside himself with frustration because he can’t generate a good image. Not buying it.
Google (predictably) scrambled to apologize for the “inaccurate” representations.
As Degeurin notes, this whole teapot tempest has a certain hollow ring to it.
Neither Krawczyk nor Google immediately responded to criticisms from users who said the racial representation choices extended beyond strictly historical figures. These claims, it’s worth stating, should be taken with a degree of skepticism. Some users expressed different experiences and PopSci was unable to replicate the findings. In some cases, other journalists claimed Gemini had refused to generate images for prompts asking the AI to create images of either Black or white people. In other words, user experiences with Gemini in recent days appear to have varied widely.
But just so there’s no further misunderstanding for those who inhabit “right-wing social media circles,” let’s review: Nazis were very, very White. So were practically all of the Vikings and Founding Fathers, and (most) of the Kings of England. They were just like you! Happy now?
There! Now you right-wingers have something important to share until the next conspiracy.
I’m out to dinner tonight, so hope everyone is doing well and staying warm.