From the Rules of the Road, Community Guidelines section:
3. Strive to be accurate. Use trustworthy sources. Avoid clickbait and deceptive headlines. Take a moment to check your work, evaluate the source(s), and verify the facts to the extent possible. If commenters are skeptical, take their concerns seriously. If you find out you made a mistake, own it and correct the story, and add a note at the top explaining the change you’ve made. Don’t be a part of spreading misinformation, disinformation, or conspiracy theories.
Large Language Models (LLMs) like ChatGPT are notoriously inaccurate: they routinely get basic facts wrong, can’t do math reliably, and are terrible at logic. They’re also plagiarism-laundering engines, they steal IP from authors, and they use energy like there’s no tomorrow — but that’s another set of issues. The main point is that their output can’t be trusted. My contention is that posting anything written by an LLM like ChatGPT is already a violation of the above rule and should not be allowed.
A couple of days ago someone posted a diary that was mostly “here’s a thing I asked ChatGPT, and here’s what it said.” It was not a good diary. But it got me worrying that we’re going to see more of that here, and I don’t think that’s a good idea. This site is supposed to be reality-based.
We can’t allow people to post nonsense just because they used an LLM to write it. It’s far too easy to generate nonsense with an LLM, and fact-checking LLM output is harder than doing the work of writing the piece yourself. I think we need a clarification in the Rules about LLM-generated content so diarists know what to avoid, and then we need to enforce it.
Let’s keep DKos reality-based. That means no LLM-written nonsense.