Ars Technica has an article about a partnership between Microsoft and news site Semafor (also, SPELL THE COMPANY NAME CORRECTLY YOU DWEEBS. Ahem.) As near as I can tell, reporters are expected to use a ChatGPT bot to find articles related to the hot news of the moment (they are supposed to publish about a dozen times a day), summarize them, and publish them to Semafor’s new Signals webpage.
For a breaking news event, Semafor journalists will use AI tools to quickly search for reporting and commentary from other news sources across the globe in multiple languages. A Signals post might include perspectives from Chinese, Indian, or Russian media, for example, with Semafor’s reporters summarizing and contextualizing the different points of view, while citing its sources.
This is what I meant about imitative AI being more hype than help. Semafor is very careful to point out that real journalists will write the articles. It is meant to be reassuring on a couple of levels. First, it plays into the idea that AI can help humans, not replace them. Second, and just as important, the hallucination problem imitative AI has means that someone has to vet the material found; otherwise there is no guarantee that it will be real or factual, two kind of important features for, you know, news. How, then, is this supposed to work?
There are two possibilities. The first is that the tool is really good at finding and summarizing material from non-English sources, and the reporters can write their stories faster than they would have if they had to rely on Google search and translation. Except. Except we know that the material has to be vetted because of the aforementioned AI lying problem. And we know that search is deteriorating under the combined weight of AI-produced sludge and reputable sources blocking search robots, especially AI robots, and locking more of their material behind paywalls. So the reporters either need to be fluent in multiple languages or they need to use a translation program anyway to deal with the material they are presented. And then they need to be able to identify holes in the results and go find material to close those holes. I don’t see how this is going to be any more efficient than existing processes. Which leaves the other possibility.
Semafor’s new Signals product turns into a link farm. The reporters, under pressure to publish summaries multiple times a day, simply regurgitate what the chatbot spits out to them, maybe spending enough time to weed out the most obvious bullshit. We get not an authoritative look at an issue, but a mindless list of links of dubious quality. The Signals webpage turns into just another news aggregator with a bit of “one side says this, the other side says that” nonsense that too many journalism leaders think constitutes fairness and accuracy.
Now, ask yourself which one is cheaper to produce: well-vetted, in-depth stories that convey the nuance of real issues affecting real people? Or a spew of half-vetted links generated mostly automatically and stuffed into a both-sides template for your perusal? Right. Until proven otherwise, my money is on the second option, given how online media generally works and the desperate state mass media finds itself in today.
Now, it is possible I am wrong. Microsoft is paying Semafor a substantial bit of money, so perhaps they have the inclination and resources to do this correctly. But it feels like hype, like bosses who don’t understand the job well enough to know that the AI tool they think will save them time and money will not actually save much of either. Dreams of more with less dance in their heads, but the actual state of the art is unlikely to produce meaningful journalism. It will, however, likely convince them they don’t need as many journalists.
And Signals will be mostly noise, I think, given that doing a bad job automatically is easier and cheaper than doing a good job with care.