This morning’s Seattle Times reprinted an article from the Washington Post. I grabbed the title they put on the front page, but the actual article title (page A2) is more accurate:
Twitter finds its algorithms amplify conservative content more
I think we can now take all the whining by the GOP about social media favoring us socialists, and toss it down the outhouse hole to keep proper company with the rest of the “conservative” talking points.
Short story: Twitter researchers analyzed millions of 2020 tweets by elected officials in seven countries, as well as links to political content from news outlets. They had outside experts classify the content as right- or left-leaning, to avoid the appearance of bias on their part. Conclusion:
“Our results reveal a remarkably consistent trend: In 6 out of 7 countries studied, the mainstream political right enjoys higher algorithmic amplification than the mainstream political left,” the researchers said in a 27-page report.
The article mentions the “black box” problem with machine learning: after the algorithm has crunched along for a while, even the people who wrote it have very little way of determining what it “learned” or how it makes the subsequent rating decisions that drive what people are more likely to see.
My guess, as a former SW engineer, is that there was almost certainly an initial bias in the system to select topics based on links and likes (that’s where the eyeballs that mean money come from). Since controversy tends to drive interest, the amplification of topics that got a lot of attention for ANY reason took over. A machine algorithm doesn’t care about truth; starting as a blank slate, it would only look at volume.
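To make that guess concrete, here is a minimal toy sketch (all names and numbers are hypothetical, not Twitter’s actual code): a ranker that scores posts purely by engagement volume. Nothing in the scoring function ever consults accuracy, so the most controversial post wins the timeline slot by default.

```python
# Hypothetical toy ranker: score = raw engagement volume, truth ignored.

def engagement_score(post):
    # Likes, retweets, and link clicks all count equally;
    # the "accurate" field is never consulted.
    return post["likes"] + post["retweets"] + post["link_clicks"]

posts = [
    {"text": "dry policy analysis", "likes": 40, "retweets": 5,
     "link_clicks": 10, "accurate": True},
    {"text": "outrageous hot take", "likes": 900, "retweets": 400,
     "link_clicks": 300, "accurate": False},
]

# Sort descending by engagement: the inaccurate but controversial
# post ranks first, purely on volume.
ranked = sorted(posts, key=engagement_score, reverse=True)
print(ranked[0]["text"])
```

The point of the sketch is only that “blank slate plus volume” needs no political intent at all to end up amplifying whatever generates the most heat.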
Now the research team is planning to embark on “root cause analysis” to figure out changes to “reduce adverse impact.” Unfortunately the head researcher, Rumman Chowdhury, kind of stepped in it with this statement:
“Algorithmic amplification is not problematic by default — all algorithms amplify,” Chowdhury said in the blog post. “Algorithmic amplification is problematic if there is preferential treatment as a function of how the algorithm is constructed versus the interactions people have with it.”
If this really was machine learning, both the algorithm’s construction AND the interactions people have with it will affect how it learns to apply preferential treatment to certain messages.
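That interaction between construction and user behavior can be sketched as a simple feedback loop (again, a hypothetical toy, with made-up click rates): the algorithm’s construction turns scores into exposure, user clicks turn exposure back into scores, and a small initial engagement edge compounds over time.

```python
# Hypothetical rich-get-richer loop. Both halves matter:
#   construction: exposure is proportional to current score
#   interaction:  clicks on exposed content raise the score

scores = {"left": 1.0, "right": 1.0}          # start with no built-in bias
click_rate = {"left": 0.10, "right": 0.12}    # assume one side is slightly "spicier"

for _ in range(200):
    total = scores["left"] + scores["right"]
    for side in scores:
        exposure = scores[side] / total              # construction
        scores[side] += exposure * click_rate[side]  # interaction

print(scores)  # the small click-rate edge has compounded into a score gap
```

Neither the code alone nor the users alone produced the skew; the loop between them did, which is why “construction versus interactions” is a false dichotomy.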
ETA: message → messages in last paragraph
Sunday, Oct 24, 2021 · 3:49:12 PM +00:00 · DButch
Today, the Seattle Times reprinted an article from the New York Times about Facebook’s serious problems with disinformation. Facebook’s internal researchers were finding high levels of disinformation (lies) being posted about the election, and it only got worse after the election, once the company reduced its efforts to deal with the problem.