Yeah...

  • DiltoGeggins [none/use name]
    ·
    1 year ago

    And from what I understand, all they “learn” to do is predict what letter goes next. There’s still no cognitive process, no manipulation of symbols, no abstraction of concepts.

    Fascinating but not surprising I guess

    • Frank [he/him, he/him]
      hexagon
      ·
      1 year ago

      There's a lot of argument about this. I know some people who think it's manipulating concepts, that it can abstract ideas, shit like that. But my hard counter is that the image generators can't draw hands. And the reason they can't draw hands is that they're incapable of abstraction. Despite sampling likely millions or hundreds of millions of images of hands, the model has no awareness that all of those inputs are part of a class of objects we call "hands", or that most hands have similar attributes.

      We can look at a person with extra fingers, a person with fewer or missing fingers, a monkey, a robot, a crab, a space alien, and a snowman, and we'll understand that whatever is at the end of the upper limbs, to a certain degree of difference, is a hand and has the attributes of a hand - it manipulates and grasps objects, etc.

      If someone asks us how many fingers are at the end of a hand, we know it's five, but we also know that James Doohan, despite having four fingers, still has a hand. "Hand" is an abstract object we can manipulate.

      But the plagiarism machine can't do that. All it does is reproduce variations of its data set with no semantic understanding of that data set. It can't draw hands because in its data set there are countless variations of hands: hands in all shapes, hands in all positions, hands of varying colors. We could look at all of those hands and recognize them as hands, and if asked to draw a hand in the style of X we'd still give it five fingers. If we drew more or fewer fingers we'd be doing it on purpose, knowing that we're deviating from the "ideal" hand object we understand.

      But the LLM can't abstract, it can't conceive of "hand". It just looks for statistical weights in its data sets. Since hands are so variable, the data set is a mess. There are trends in color, there are trends in lines that we would recognize as fingers. But the LLM just generates statistically likely color values. It doesn't know what hands or fingers are, so it doesn't know that the human prompting it wants a hand with five fingers, etc. It just outputs a string of numbers that are statistically similar to its training set.

      Idk if I'm explaining this well, but to me that inability to draw hands is a silver bullet to the idea that these things think or manipulate symbols. Because it's not just hands, it doesn't recognize anything. When you look at the details of the images, the little things like buttons, jewelry, complex gadgets, they're almost always blobs of noise in roughly the right shape. It has no awareness that it's being asked to draw an abstracted object from a set of objects. It's just reproducing weighted data. It can do faces because there are a vast, vast number of faces in its data set, probably far more than most other objects, and faces are very consistent in their shape and layout. So the probability that whatever nonsense it generates will be interpreted by human observers as a face is pretty high. But when you ask it to do something that isn't as consistently shaped and as massively represented in the set as faces, it chokes.
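
      Here's a toy way to see what I'm getting at (something I made up as an illustration, not how the actual generators work): average a pile of pictures of something with a consistent layout and the average still has that shape; average a pile of pictures of something highly variable and all you get is a smear.

      ```python
      import numpy as np

      # Toy illustration (invented for this comment, not actual diffusion-model code):
      # average many pictures of a consistently placed object vs. a highly variable one.
      rng = np.random.default_rng(0)
      SIZE = 64
      yy, xx = np.mgrid[:SIZE, :SIZE]

      def face_like():
          """'Face': a disc that's always in roughly the same spot."""
          cy, cx = 32 + rng.integers(-2, 3), 32 + rng.integers(-2, 3)
          return ((yy - cy) ** 2 + (xx - cx) ** 2 < 15 ** 2).astype(float)

      def hand_like():
          """'Hand': a bar at a random position and size."""
          y, x = rng.integers(5, 45, size=2)
          h, w = rng.integers(5, 15, size=2)
          img = np.zeros((SIZE, SIZE))
          img[y:y + h, x:x + w] = 1.0
          return img

      face_avg = np.mean([face_like() for _ in range(500)], axis=0)
      hand_avg = np.mean([hand_like() for _ in range(500)], axis=0)

      # The "face" average is still basically a disc (values near 0 or 1);
      # the "hand" average is a low-contrast smear with nothing close to 1.
      print("face average: max", face_avg.max().round(2), "std", face_avg.std().round(2))
      print("hand average: max", hand_avg.max().round(2), "std", hand_avg.std().round(2))
      ```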

      The tells I look for in plagiarism machine "art" are generally things like jewelry, buttons, anything that should be symmetrical. They're really bad at symmetry, presumably because they can't abstract and so aren't aware that the buttons on each side of a coat are the same object, or the same class of objects, and should be similar in most respects. Jewelry too - it's so varied, and the machine doesn't understand that pieces of jewelry are discrete objects made up of smaller objects, so it just outputs a blur that, if you actually look at it, isn't actually jewelry.

      Like maybe I'm wrong, maybe there is some weird totally alien process in there, but whatever it's doing, it's not doing anything like what we do. (Unless I am totally, completely wrong and just don't know enough to know I'm wrong, which would be really annoying).

      • DiltoGeggins [none/use name]
        ·
        1 year ago

        Like that old story about how monkeys, given enough time and a typewriter with endless ribbon and paper (and bananas I guess), will randomly produce Shakespeare's works. Might take the monkeys 10,000 years, but dangit they'll get it done. And of course, by then we'll have forgotten all context and imagine this could only have been done because they are actually Superior to us, and we will begin worshiping them with bananas as the main form of adoration.... 🍌🍌🍌🍌🍌

        • Frank [he/him, he/him]
          hexagon
          ·
          1 year ago

          And as near as I can determine, all this system does is give the monkeys a banana when they hit a key that, based on an analysis of Shakespeare, is statistically likely to be next. And eventually the monkeys are trained to assemble words in ways that resemble Shakespeare, but they're still monkeys with no idea what they're doing.
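
          If you want the toy version of that "banana training" (totally made up for illustration, obviously not actual GPT code): count which letter tends to follow which in the text, then keep picking a statistically likely next letter. You get Shakespeare-flavored noise with zero understanding behind it.

          ```python
          from collections import Counter, defaultdict
          import random

          # Toy "monkey trainer" (illustrative only): reward whichever next letter
          # actually followed the current one in the training text, then babble by
          # picking letters in proportion to those rewards.
          text = "to be or not to be that is the question"

          follows = defaultdict(Counter)
          for current, nxt in zip(text, text[1:]):
              follows[current][nxt] += 1  # one "banana" for the letter that came next

          def babble(start="t", length=40):
              out = [start]
              for _ in range(length):
                  options = follows[out[-1]]
                  if not options:
                      break
                  letters, bananas = zip(*options.items())
                  # pick the next letter weighted by how many bananas it earned
                  out.append(random.choices(letters, weights=bananas)[0])
              return "".join(out)

          print(babble())  # e.g. "t the be questo not that..." - vaguely like the source, meaning nothing
          ```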

          • DiltoGeggins [none/use name]
            ·
            1 year ago

            We can perhaps hope that eventually they begin to learn from the experience. (in a strictly evolutionary way..) :P

            • Frank [he/him, he/him]
              hexagon
              ·
              1 year ago

              I don't think it's possible. The monkeys aren't monkeys, it's a prediction engine that decides what the next token - be it a letter, word, number, whatever - will be. There's never any point in that process where it's going to start having self-reference. It's a dead end. They're trying to work backwards from the end point of billions of years of brutal selection to re-create a process they don't understand.

                • Frank [he/him, he/him]
                  hexagon
                  ·
                  1 year ago

                  Yeah, I was reading a reply where some guy said he could be a Turing machine if he had enough spare sheets of paper to work with, and that's not how human working memory works. If we assume that a cow is a spherical object in a vacuum then sure, buddy, you can simulate a Turing machine. But in the real world your meatsack can only manage so much stuff in your head, and eventually you'd reach a point where you would no longer be able to keep performing the tasks necessary to do your Turing machine thing. That's one of the most important things computers have going - you can store shitloads of information in memory and hard storage without losing track of it.
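
                  Just to show how much bookkeeping "being a Turing machine" actually involves, here's a toy one I sketched out (my own example, it just adds 1 to a binary number). Every single step you have to track the tape, the head position, and the current state - that's the part a meatsack with paper eventually fumbles.

                  ```python
                  # Minimal Turing machine sketch: adds 1 to a binary number.
                  # rules[(state, symbol)] = (symbol to write, head move, next state)
                  rules = {
                      ("right", "0"): ("0", +1, "right"),
                      ("right", "1"): ("1", +1, "right"),
                      ("right", " "): (" ", -1, "carry"),
                      ("carry", "1"): ("0", -1, "carry"),
                      ("carry", "0"): ("1", 0, "halt"),
                      ("carry", " "): ("1", 0, "halt"),
                  }

                  tape = list("1011 ")   # binary 11, with one blank cell at the end
                  head, state, steps = 0, "right", 0
                  while state != "halt":
                      write, move, state = rules[(state, tape[head])]
                      tape[head] = write
                      head += move
                      steps += 1

                  print("".join(tape).strip(), "after", steps, "steps")  # 1100 (= 12) after 8 steps
                  ```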

          • 0karin728 [any]
            ·
            1 year ago

            This is just the whole Chinese room argument; it confuses consciousness with intelligence. Like, you're completely correct, but the capabilities of these things scale with the compute used during training, with no sign of diminishing returns any time soon.

            It could understand Nothing and still outsmart you, because it's good at predicting the next token that corresponds with behavior that would achieve the goals of the system. All without having any internal human-style conscious experience. In the short term this means that essentially every human being with an internet connection now suddenly has access to a genius-level intelligence that never sleeps and does whatever it's told, which has both good and bad implications. In the long term, they could (and likely will) become far more intelligent than humans, which will make them increasingly difficult to control.

            It doesn't matter if the monkey understands what it's doing if it gets so good at "randomly" hitting the typewriter that businesses hire the monkey instead of you, and then, as the monkey gets better and better, it starts handing out instructions for producing chemical weapons and other biowarfare agents to randos on the street. We need to take this technology seriously if we're going to prevent Microsoft, OpenAI, Facebook, Google, etc. from accidentally Ending the World with it, or deliberately making the world Worse with it.

            • UlyssesT
              ·
              edit-2
              24 days ago

              deleted by creator

              • 0karin728 [any]
                ·
                1 year ago

                They're starting a dangerous arms race where they release increasingly dangerous, poorly tested AI to the public while dramatically overselling its safety. Pointing out that this technology is dangerous is the exact opposite of what they want.

                You're playing into their grift by acting like the entire idea of AI is some bullshit techbro hype cycle, which is exactly what Microsoft, OpenAI, Facebook, etc. want. The more people pay attention and think "hey, maybe we shouldn't be integrating enormous black-box neural networks deep into all of our infrastructure and replacing key human workers with them", the more difficult it will be for them to continue doing this.

                • UlyssesT
                  ·
                  edit-2
                  24 days ago

                  deleted by creator

                  • 0karin728 [any]
                    ·
                    1 year ago

                    What talking points then? I seem to be misunderstanding your criticism (or it's meaninglessly vague, but I'm trying to be charitable). What specifically have I said that you take issue with?

            • Frank [he/him, he/him]
              hexagon
              ·
              1 year ago

              It's not the Chinese room problem, it's a practical limitation of the ChatGPT plagiarism machines. We're not talking about a thought experiment where the guy in the room has the vast, vast, vast set of rules needed to respond to any arbitrary input in a way the Chinese speaker will interpret as semantically meaningful output. We're talking about a machine that exists right now, one that, far from being trained on an ideal, complete model of Chinese, is trained on billions and billions of shitposts from the internet.

              Maybe someone will make a machine like that in the future, but this ain't it. This is a machine that predicts letters, has no ability to manipulate symbols, no semantic understanding, and no way to assess the truth value of its outputs. And for various reasons, including being trained on billions of internet shitposts, it's unlikely to ever develop these things.

              I'm really not interested in speculation about future potential intelligent systems and AIs. It's boring, it's been done to death, there's nothing new to add. Right now I want to better understand what these things do so I can own my friends who think they're manipulating abstract symbols and understand the semantic value of those symbols.

              • 0karin728 [any]
                ·
                1 year ago

                Yeah, obviously. Current AI is shit. But it's proof that deep learning scales well enough to perform (or at least somewhat consistently replicate, depending on your outlook) behavior that humans recognize as intelligent.

                Three years ago these things could barely write coherent sentences; now they can replace a substantial number of human workers. Three years from now? Who the fuck knows - emergent abilities are hard to predict in these models by definition, but new ones Keep Appearing when they train larger and larger models on higher-quality data. This means large-scale social disruption at best and catastrophe (everything from AI-enabled bioterrorism to AI propaganda-driven fascism) at worst.

              • UlyssesT
                ·
                edit-2
                24 days ago

                deleted by creator

      • UlyssesT
        ·
        edit-2
        24 days ago

        deleted by creator