https://nitter.1d4.us/MKBHD/status/1668048839675092992

  • GorbinOutOverHere [comrade/them] · 1 year ago

    > We keep finding out that there is cognitive processing going on in your digestive system

    literally just more neurons, so what? More information and sensory shit getting added to the mix, but it's still basically an infinitely tangled algorithmic spaghetti that makes up your consciousness

    > We’re not going to make a human-like mind

    you can just stop it there, I'm aware

    > It’s so much, much more complex than a few logic gates on a rock

    A neuron is much more complex and can consequently do a lot more things, adding to the complexity of the signaling, processing, and whatever-else-you-wanna-call-it mix that eventually comes out as consciousness. But that doesn't mean that "a few logic gates on a rock" can't be configured to operate with a similar degree of complexity
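
    like, here's a toy sketch of what I mean (Python, every number in it made up by me): nothing but multiply, add, and compare, which is about all "a few logic gates on a rock" buy you, and composing just three threshold units already produces behavior no single gate or neuron-like unit can

    ```python
    # A toy "neuron": weighted sum of inputs, fire if over a threshold.
    # Built from nothing but multiply, add, and compare, i.e. the kind
    # of thing a few logic gates on a rock can implement.
    def neuron(inputs, weights, threshold):
        total = sum(x * w for x, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    # Wire three together and you get XOR, which famously no single
    # threshold unit can compute: the complexity comes from composition.
    def xor(a, b):
        h_or = neuron([a, b], [1, 1], 1)          # fires if a OR b
        h_and = neuron([a, b], [1, 1], 2)         # fires if a AND b
        return neuron([h_or, h_and], [1, -1], 1)  # OR but not AND

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", xor(a, b))  # prints 0, 1, 1, 0
    ```

    scale that composition up by a few billion units with tuned weights and you're back at the tangled spaghetti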

    I don't think chatgpt is alive but I do think that at some point the systems underlying things like it will grow sufficiently complex that you could call them conscious, and, again, like I said, you'd have to resort to some real questionable arguments to say that biological life is different

    I would be Commander Data's friend and not allow him to be chopped up and studied just because his circuitry could be reduced in argument to "a few logic gates on a rock" :soviet-huff:

    • Frank [he/him] · edit-2 · 1 year ago

      > I don't think chatgpt is alive but I do think that at some point the systems underlying things like it will grow sufficiently complex that you could call them conscious, and, again, like I said, you'd have to resort to some real questionable arguments to say that biological life is different

      People are already failing the Turing test against chatgpt, but as it stands it doesn't do anything like thinking. All it can do is produce sequences of letters based on statistical weights in its training model. There's no awareness, no context, no abstraction, no cognitive process whatsoever. It can't reflect, it can't evaluate truth values, it can't perform any cognitive processes at all.

      People talk about lying and hallucination, but both of those terms are deeply misleading and fundamentally misunderstand what the models are doing. It can't lie - it has no theory of mind. It has no awareness of itself, let alone anything else. It can't hallucinate either. It's not answering questions incorrectly or giving you the wrong information; it's not answering your questions at all. It's comparing the sequence of letters in your prompt to sequences of letters in its training set and producing a statistically weighted sequence of letters based on that training set. There's no abstraction, there's no manipulation of concepts. There's no cognition.

      That's why the image plagiarism generators can't draw hands, or really any complex object except faces. Hands come in so many different shapes that the model weighs the colors and shapes of fingers, but, lacking abstraction, it can't recognize that all of those shapes represent the same conceptual object. It doesn't know that most hands have five fingers because it doesn't know what hands or fingers are. It doesn't know anything. It's a math problem that compares numerical values and produces outputs statistically similar to the input prompt.
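
      To make the "statistically weighted sequence of letters" point concrete, here's a toy sketch in Python: a character-level bigram model. It's vastly cruder than the transformer chatgpt actually runs on (which samples tokens from learned weights rather than raw letter counts), but the basic move of continuing a sequence from frequencies, with zero understanding, is the same.

      ```python
      import random
      from collections import defaultdict

      # "Training": count which letter follows which in the source text.
      # No concepts, no meaning, just co-occurrence counts.
      training_text = "the cat sat on the mat and the cat ran"
      counts = defaultdict(lambda: defaultdict(int))
      for a, b in zip(training_text, training_text[1:]):
          counts[a][b] += 1

      # "Generation": repeatedly sample the next letter, weighted by how
      # often it followed the current one. It never answers anything; it
      # only extends the sequence.
      def generate(seed, length=40):
          out = seed
          for _ in range(length):
              following = counts[out[-1]]
              if not following:
                  break
              letters, weights = zip(*following.items())
              out += random.choices(letters, weights=weights)[0]
          return out

      print(generate("th"))  # fluent-ish nonsense assembled from counts
      ```

      Scale the counting up to billions of learned weights over tokens and the output gets eerily fluent, but it's the same kind of operation underneath.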

      Personally I think trying to make thinking computers is stupid bazinga stuff. We already have fully functioning, self-aware machines capable of all kinds of extremely complex functions we mostly don't understand yet. We should work with what we've got instead of trying to recreate one of the most complex emergent systems in observable reality when we don't even understand its basic operating principles. Augmenting existing brains, to me at least, makes a lot more sense than fussing around scratching crude drawings on rocks.