There is a question mark on the title because I am genuinely unsure how much of what I am about to discuss is deliberate on Amazon’s part, and how much is just benign neglect.
Wired has a piece on what appears to be a flood of AI-generated summaries of non-fiction works using the same titles as their victims. Some authors have been able to get such works removed; others aren’t sure they can. Given my normal proclivities at this little newsletter, you are probably assuming this is going to be a piece on another AI-driven mess. And, yes, there will be a bit of that. But the more interesting question to me is whether Amazon believes such a world is the one in which it will make the most money.
Getting the AI aspect out of the way: yes, this is exacerbated by AI. One of the few things large language models are good at is summarizing a text. My limited experience and the writing of others suggest that such LLMs (what we commonly refer to as AI) can produce a D+, C- kind of summation of a work. There are obvious copyright issues with some of these. One author found an almost word-for-word copy of her work in the summary, something that no court would call fair use. But even whether a bare summary is commercially permissible is disputed: some experts thought these works were fine, in a CliffsNotes kind of way. Others, though, thought that what made CliffsNotes work was the analysis component, something these works do not appear to have. Lawyers will likely make some bank figuring this out if the trend continues.
But will the trend continue? Right now, Amazon will take down at least some of these if asked. But the article also notes that Amazon will not proactively prevent these kinds of works from going up. Now, part of that is likely expense: this would not be the easiest kind of thing to discover and prevent, and it would require an appeals process. And that means Amazon would have to pay people to deal with it, something that is anathema to Amazon. More interesting to me, though, is whether Amazon thinks these works make them more money than the headaches they cause. This might be, in fact, the world that Amazon wants.
One of the consistent, though not universal, pieces of advice about self-publishing on Amazon is that you need to publish a lot of books. Amazon’s algorithms push frequent writers, and things like its Kindle Unlimited program (which pays per page read) have trained readers to expect a lot of works in a very short period of time. As a consequence, these works tend to be at the long-novella, short-novel length. Outputting 50,000 words of quality is easier than outputting 150,000 words of quality. AI-produced works, then, feed into the general process that Amazon seems to want: fast, short works that readers will be encouraged to pick up and blow through on their way to the next piece. Self-published authors already discuss publishing multiple books a month with the help of LLM systems like ChatGPT. There is not much difference between those plans and these summaries.
The counterargument is quality: make as many jokes as you want, but readers won’t come back if there is no quality. Most successful self-published authors may not be producing the next Great American Novel (who is?) but they are producing works that people enjoy reading. If Amazon lets their marketplace get overrun with AI garbage, there is the real possibility that they will degrade the experience too much and drive readers away. The question then is, do they care?
Amazon already has a problem with faulty and counterfeit products, but they haven’t done a ton to correct that issue. They seemingly do not need to, as they are effectively an online monopoly in significant portions of the world. Quality issues, then, haven’t hurt their overall bottom line much, if at all. Reading might be different, though. It is easier, not to mention much, much cheaper, to produce book-like content than to produce actual knockoff products. And someone in the Kindle department is probably looking at Kobo’s success outside the US and Apple’s dormant but possibly massive advertising/goodwill advantage and thinking how easy it would be for a poor experience to scuttle a significant source of revenue.
This all goes back to systems: Amazon has created a world where their bread and butter for a certain kind of market has been trained to expect short works on an accelerated timeline. That, in turn, pressures authors to find shortcuts to producing those works on that timeline. Along come LLMs like ChatGPT that can output book-like material, even if the quality is almost always subpar. The incentives to produce a flood of that content are pretty obvious. But too much subpar material will eventually drive readers away and/or open a space for competitors to lure Amazon’s customers away. In the meantime, though, Amazon benefits from people buying the AI-generated material.
I genuinely do not know if Amazon can wean itself off its current systems or if it will be happy to make short-term money at the possible expense of long-term profit. Modern corporations, after all, are not incentivized to think past the next quarter. It would be ironic if Amazon’s self-publishing empire fell to an AI-driven apocalypse perfectly tuned to exploit the system Amazon built that empire on in the first place.