Not only did CNET shadily pass off AI-generated text as staff-written material, that material seems to have been plagiarized as well:
Now, a fresh development may make efforts to spin the program back up even more controversial for the embattled newsroom. In addition to those factual errors, a new Futurism investigation found extensive evidence that the CNET AI's work has demonstrated deep structural and phrasing similarities to articles previously published elsewhere, without giving credit. In other words, it looks like the bot directly plagiarized the work of Red Ventures competitors, as well as human writers at Bankrate and even CNET itself.
Jeff Schatten, a professor at Washington and Lee University who has been examining the rise of AI-enabled misconduct, reviewed numerous examples of the bot's apparent cribbing that we provided. He found that they "clearly" rose to the level of plagiarism.
CNET's AI Journalist Appears to Have Committed Extensive Plagiarism (futurism.com)
This is not surprising. These systems produce their content by taking in an enormous amount of data and essentially trying to predict what words follow what other words in a given context. It follows naturally that such a system would regurgitate large chunks of the material on which it was trained, especially in specialized areas where the training data is thin.
This is likely to be an issue for these kinds of systems for a long time going forward. There is already one lawsuit against one of the AI art programs claiming that because it essentially takes artists' works and chunks them up for reuse, it is a copyright violation. The proponents of this art like to claim that the machine is learning just like anyone else would, but this is not true. The machine is copying and reusing the pixels from the originals. Does that constitute fair use? Is it transformative enough? Maybe. Maybe not -- if you copy all of an artist's work, then say, "produce me a picture in the style of Artist A," and use the elements of the copied work to do so, that doesn't seem very transformative to me. And it is certainly not learning in any meaningful sense of the word.
AI systems are not intelligent. They are not learning; they are merely taking and regurgitating, in ways that are sometimes not especially unique. By using the language of actual education, the creators and private money behind these systems deliberately obscure what they are, what they actually do, and the real questions of ownership, fairness, and compensation that these systems demand be answered. CNET quite literally took other people's work and used it without payment, in a manner that can only be described as plagiarism. Pretending that the machine learned something in the process is just so much nonsense designed to once again take money from the people who create and put it into the pockets of the people who own capital.
As a society, we need to do much better than simply accepting capital's word that what it is doing is allowed, ethical, and just. Especially as it proves to us, in real time, that it very often is not.