This partly reposts my comment in the news megathread, but most of it is new.

OpenAI just announced Sora, a text-to-video tool, and the results are really fucking good, especially compared to previous state-of-the-art AI video generators. It has me thinking about job security again.

Generative AI is already displacing workers.

A study surveying 300 leaders across the entertainment industry reports that three-fourths of respondents indicated that AI tools supported the elimination, reduction or consolidation of jobs at their companies. Over the next three years, it estimates that nearly 204,000 positions will be adversely affected.

The Concept Art Assn. and the Animation Guild commissioned the report, which was conducted from Nov. 17 to Dec. 22 by consulting firm CVL Economics, amid concerns from members over the impact of AI on their work. Among the issues is that concept artists are increasingly being asked to “clean up” AI-generated works by studios, lowering their billed hours and the pool of available jobs, says Nicole Hendrix, founder of the advocacy group.

“We’re seeing a lot of role consolidation and reduction,” Hendrix says. “A lot of people are out of work right now.”

According to the report, nearly 77 percent of respondents use AI image generators, which enable, for example, individuals to upload landscape photos to virtual production screens or to speed up rotoscoping in postproduction. These tools also have applications in 3D modeling, storyboarding, animation and concept art, among other areas.

Generative AI displacing workers isn't some future hypothetical; it's happening right now. As someone working in a field that is vulnerable to automation by AI tools, I'm genuinely worried that OpenAI (or some other company) will release a new tool that puts me out of a job entirely.

Is anyone else worried for their job? Is there anything that can be done?

  • wopazoo [he/him] (OP) · 10 months ago

    “It's not even actual AI, just an algorithm.”

    How much longer can you really say that? LLMs and diffusion models in 2024 are so far beyond conventional algorithms in capability that “it's just an algorithm” is technically true but tells you nothing.

    Like, technically, lions and tigers are just biochemical reactions and atoms, but if I'm getting chased by one, knowing that isn't going to bring me any solace.

    • wopazoo [he/him] (OP) · 10 months ago

      It only took a few years from shitty dall-e mini memes to photorealistic images. Even if AI isn't "good enough" (a standard that keeps on getting raised every time a new development comes out), what about in 2025? What about in 2026?

      Like, "AI can't draw hands" used to be a meme, but AI can draw hands now. AI used to not be able to form coherent backgrounds, but AI can form coherent and consistent 3D environments now. I think people have a right to be worried!

      GPT used to not be able to keep track of long conversations, but Gemini 1.5 supports a context window of 1 million tokens now.
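      For a sense of scale, here's a rough back-of-the-envelope sketch. It assumes the common rule of thumb of ~0.75 English words per token (actual counts vary by tokenizer and text) and a ~100,000-word novel:

```python
# Rough estimate of how much text fits in a 1-million-token context window.
# Assumes ~0.75 English words per token (a common rule of thumb; actual
# counts depend on the tokenizer and the text) and a 100k-word novel.
CONTEXT_TOKENS = 1_000_000
WORDS_PER_TOKEN = 0.75
NOVEL_WORDS = 100_000

words = CONTEXT_TOKENS * WORDS_PER_TOKEN
novels = words / NOVEL_WORDS
print(f"~{words:,.0f} words, roughly {novels:.1f} full-length novels")
```

      Under those assumptions, that's on the order of 750,000 words of context, several novels' worth of text in a single conversation.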

      This technology is getting better day by day, and it shows no signs of slowing down.