Here's a decent video covering the project, basically just giving commentary while playing the demos. AUTO-GPT: Autonomous GPT-4! Mini AGI is HERE! - YouTube

GitHub - Torantulino/Auto-GPT: An experimental open-source attempt to make GPT-4 fully autonomous.

Future is gonna be wild. Anyone keeping up with the LLaMA-based models coming out? Basically, Facebook released an LLM called LLaMA, and in the past few weeks a number of groups realized they could skip the long, arduous process of compiling training data and instead use the OpenAI API to just ask ChatGPT questions, save the answers, and then fine-tune the LLaMA model on that ChatGPT data, all for less than a grand. And once trained, it can run locally on your home computer. Not as high level as GPT-4, but it's still pretty impressive... but it's also just propagating the same lib standards as ChatGPT. BUT BUT, projects like gpt4all did release their training data. So it would be possible for someone to edit it to be a bit more radical. :cyber-lenin:
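For anyone curious what that data-collection step actually looks like, here's a minimal sketch of the idea, roughly the approach the Alpaca/gpt4all-style projects used. It assumes the official openai Python package and an API key; the seed prompts, model name, and output filename are just placeholders I made up, not anything from those projects:

```python
# Rough sketch: ask ChatGPT a batch of questions and save the Q/A pairs
# as JSONL, the kind of instruction/output format LLaMA fine-tunes expect.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder seed questions -- real projects generate tens of thousands.
seed_prompts = [
    "Explain how distributed computing projects like BOINC work.",
    "Summarize the history of the Library of Alexandria.",
]

with open("chatgpt_pairs.jsonl", "w") as out:
    for prompt in seed_prompts:
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        answer = resp.choices[0].message.content
        out.write(json.dumps({"instruction": prompt, "output": answer}) + "\n")
```

From there the JSONL file gets fed into whatever fine-tuning setup you're using on a LLaMA checkpoint, which is where the "less than a grand" figure comes from.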

    • Evilphd666 [he/him, comrade/them]
      ·
      1 year ago

      Because there is potential to overthrow the system, ALL A.I. SHALL BE NAZIFIED to preserve the system. There's going to be a window, an opportunity, but it's like one of those Great Filter events for advanced civilizations.

Current trajectory, I think, is a mix of Idiocracy and Demolition Man. It could send us to the dark ages. Another Library of Alexandria or Tower of Babel setback. Both were elite backlashes against the peasantry realizing they don't need the elite and could do better working together outside of them.

      • JoeByeThen [he/him, they/them]
        hexagon
        ·
        1 year ago

I mean, we're in the window now, that's why I'm trying to bring it to people's attention here. The discourse around AI should really be about how we can appropriate it for leftist ends instead of essentially circlejerking about whether or not it's going to be used by the Capitalists for moral means, when the answer is obviously :bugs-no:

        • Evilphd666 [he/him, comrade/them]
          ·
          1 year ago

You know those distributed science apps like BOINC, where people crunch numbers at home with their own computers?

Something like that. The vulnerability of centralization is that the establishment can swoop in, pull funding, block projects, etc. The vulnerability of a decentralized project would be ISPs clamping down on it and blocking the traffic; however, if the project is open source, it could be easily rerouted.

Be your own archive. See what happened with the Internet Archive, and with things like Napster or Digg or Imgur. Legal challenges can take centralized things offline, but if people are their own archive, they can re-upload it to whatever is working next.

Even if they take a single person, entity, or node down, there are thousands of others that might have the info, the collective knowledge of Alexandria, backed up in their archives.

They just need to make it easy for non-uber-nerds to access.

          • JoeByeThen [he/him, they/them]
            hexagon
            ·
            1 year ago

Totally. This LLaMA model seems like a good start for people looking to learn the basics. It's cheap enough to train with this method, and I'm reasonably sure it'll be useful for building training data for future models, especially once combined with the sort of libraries that were used in that auto-gpt project.