Basically every time AI tries to create its own thing, it's incrementally shittier than whatever it trained on. As more and more AI-produced content floods the internet, it's increasingly training on AI-generated material. The effect is analogous to scanning and printing the same document over and over again until it ultimately becomes a blurry mess. AI cannot create on its own; it can only modify pre-existing human work.

The article's main solution is to keep some kind of master backup of work labelled as existing before the rise of LLMs, but it isn't optimistic about this actually happening. I'm wondering: if in a few years the "write TV script" button on ChatGPT generates completely unworkable garbage, will studios stop trying to pretend it's a viable replacement for writing staff?

  • Frank [he/him]
    ·
    1 year ago

    Yeah, but they'll just make a plagiarism bot that fakes the video. They've already got plagiarism bots that fake the steps of drawing an image by playing the finished image back in reverse.

    • ssjmarx [he/him]
      ·
      1 year ago

      I think there will be an "arms race" between the generators and the verification methods, but the speedrun community, for example, has been dealing with this exact problem for a while, and the methods for spotting fake runs are really sophisticated for the most popular games. At the very least, you can ask an artist technical questions, and 90% of cheaters will get weeded out because they won't be able to talk about their process.