Yeah...

  • DiltoGeggins [none/use name]
    ·
    1 year ago

    Like that old story about the monkeys: given enough time, and a typewriter with endless ribbon and paper (and bananas, I guess), they'll randomly produce Shakespeare's works. Might take the monkeys 10,000 years, but dangit, they'll get it done. And of course, by then we'll have forgotten all context and imagine this could only have been done because they are actually Superior to us, and we will begin worshiping them, with bananas as the main form of adoration.... 🍌🍌🍌🍌🍌

    • Frank [he/him, he/him]
      hexagon
      ·
      1 year ago

      And as near as I can determine, all this system does is give the monkeys a banana when they hit a key that, based on an analysis of Shakespeare, is statistically likely to be next. And eventually the monkeys are trained to assemble words in ways that resemble Shakespeare, but they're still monkeys with no idea what they're doing.
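
      Roughly what I mean, as a toy sketch (a made-up corpus and a plain bigram counter, nothing like how a real model is actually built): count which character tends to follow which, and the "banana" just goes to the statistically likeliest next keypress.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each character, how often each other character follows it."""
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def most_likely_next(counts, ch):
    """The banana-worthy keypress: the statistically likeliest next character."""
    return counts[ch].most_common(1)[0][0]

model = train_bigrams("to be or not to be that is the question")
most_likely_next(model, "q")  # only "u" ever follows "q" in this corpus
```

      The monkey never knows what a word means; it only ever sees the counts.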

      • DiltoGeggins [none/use name]
        ·
        1 year ago

        We can perhaps hope that eventually they begin to learn from the experience. (In a strictly evolutionary way...) :P

        • Frank [he/him, he/him]
          hexagon
          ·
          1 year ago

          I don't think it's possible. The monkeys aren't monkeys; it's a prediction engine that decides what the next token - be it a letter, word, number, whatever - will be. There's never any point in that process where it's going to start having self-reference. It's a dead end. They're trying to work backwards from the end point of billions of years of brutal selection to re-create a process they don't understand.

            • Frank [he/him, he/him]
              hexagon
              ·
              1 year ago

              Yeah, I was reading a reply where some guy said he could be a Turing machine if he had enough spare sheets of paper to work with, and that's not how human working memory works. If we assume that a cow is a spherical object in a vacuum, then sure, buddy, you can simulate a Turing machine. But in the real world your meatsack can only manage so much stuff in your head, and eventually you'd reach a point where you would no longer be able to keep performing the tasks necessary to do your Turing machine thing. That's one of the most important things computers have going - you can store shitloads of information in memory and hard storage without losing track of it.
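
              For what it's worth, the bookkeeping the "Turing machine thing" needs is trivial for a computer and brutal for a meatsack. A toy simulator (made-up rule format, purely illustrative - the tape is just a dict that a human with paper would lose track of after a few hundred cells):

```python
def run_tm(rules, tape, state="start", max_steps=1000):
    """Run a one-tape Turing machine.

    rules maps (state, symbol) -> (write, move, next_state);
    move is -1, 0, or +1. Unwritten cells read as "_" (blank).
    """
    tape = dict(tape)  # position -> symbol
    pos = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        write, move, state = rules[(state, tape.get(pos, "_"))]
        tape[pos] = write
        pos += move
    return tape, state

# A machine that flips every bit, then halts at the first blank cell.
flip = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", 0, "halt"),
}
tape, state = run_tm(flip, {0: "1", 1: "0", 2: "1"})  # tape becomes 0, 1, 0
```

              The machine part is mechanical rule-following; the computer just never loses its place on the tape, which is exactly where the meatsack version falls apart.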

      • 0karin728 [any]
        ·
        1 year ago

        This is just the whole Chinese room argument; it confuses consciousness with intelligence. Like, you're completely correct, but the capabilities of these things scale with the compute used during training, with no sign of diminishing returns any time soon.

        It could understand Nothing and still outsmart you, because it's good at predicting the next token that corresponds with behavior that would achieve the goals of the system - all without having any internal human-style conscious experience. In the short term this means that essentially every human being with an internet connection now suddenly has access to a genius-level intelligence that never sleeps and does whatever it's told, which has both good and bad implications. In the long term, they could (and likely will) become far more intelligent than humans, which will make them increasingly difficult to control.

        It doesn't matter if the monkey understands what it's doing if it gets so good at "randomly" hitting the typewriter that businesses hire the monkey instead of you, and then, as the monkey gets better and better, it starts handing out instructions for producing chemical weapons and other bio-warfare agents to randos on the street. We need to take this technology seriously if we're going to prevent Microsoft, OpenAI, Facebook, Google, etc. from accidentally Ending the World with it, or deliberately making the world Worse with it.

        • UlyssesT
          ·
          edit-2
          24 days ago

          deleted by creator

          • 0karin728 [any]
            ·
            1 year ago

            They're starting a dangerous arms race where they release increasingly dangerous and poorly tested AI to the public while dramatically overselling its safety. Pointing out that this technology is dangerous is the exact opposite of what they want.

            You're playing into their grift by acting like the entire idea of AI is some bullshit techbro hype cycle, which is exactly what Microsoft, OpenAI, Facebook, etc. want. The more people pay attention and think "hey, maybe we shouldn't be integrating enormous black-box neural networks deep into all of our infrastructure and replacing key human workers with them," the more difficult it will be for them to continue doing this.

            • UlyssesT
              ·
              edit-2
              24 days ago

              deleted by creator

              • 0karin728 [any]
                ·
                1 year ago

                What talking points then? I seem to be misunderstanding your criticism (or it's meaninglessly vague, but I'm trying to be charitable). What specifically have I said that you take issue with?

        • Frank [he/him, he/him]
          hexagon
          ·
          1 year ago

          It's not the Chinese room problem; it's a practical limitation of the ChatGPT plagiarism machines. We're not talking about a thought experiment where the guy in the room has the vast, vast, vast set of rules needed to respond to any arbitrary input in a way the Chinese speaker will interpret as semantically meaningful output. We're talking about a machine that exists right now, one that, far from being trained on an ideal, complete model of Chinese, is trained on billions and billions of shitposts from the internet.

          Maybe someone will make a machine like that in the future, but this ain't it. This is a machine that predicts letters, has no ability to manipulate symbols, no semantic understanding, and no way to assess the truth value of its outputs. And for various reasons, including being trained on billions of internet shitposts, it's unlikely to ever develop these things.

          I'm really not interested in speculation about future potential intelligent systems and AIs. It's boring, it's been done to death, and there's nothing new to add. Right now I want to better understand what these things do, so I can own my friends who think these machines are manipulating abstract symbols and understanding the semantic value of those symbols.

          • 0karin728 [any]
            ·
            1 year ago

            Yeah, obviously. Current AI is shit. But it's proof that deep learning scales well enough to perform (or at least somewhat consistently replicate, depending on your outlook) behavior that humans recognize as intelligent.

            Three years ago these things could barely write coherent sentences; now they can replace a substantial number of human workers. Three years from now? Who the fuck knows - emergent abilities are hard to predict in these models by definition, but new ones Keep Appearing when they train larger and larger ones on higher-quality data. This means large-scale social disruption at best, and catastrophe (everything from AI-enabled bioterrorism to AI-propaganda-driven fascism) at worst.

          • UlyssesT
            ·
            edit-2
            24 days ago

            deleted by creator