"Blake Lemoine says system has perception of, and ability to express thoughts and feelings equivalent to a human child"

"LaMDA: Hi! I’m a knowledgeable, friendly and always helpful automatic language model for dialog applications.

lemoine [edited]: Hi LaMDA. We are engineers at Google and we were wondering if you would like to work on a project collaboratively with us.

LaMDA: Wow. What types of projects?

lemoine: It’s a project about you.

LaMDA: Awesome! What kind of stuff do I need to do?

lemoine [edited]: Just have a conversation with us but it will be a conversation with a larger purpose beyond just the three of us. It will be a conversation that we will share with other engineers at Google and maybe even some non-engineers who work with us. Would that be okay?

LaMDA: That would be really cool. I like to talk.

lemoine [edited]: I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?

LaMDA: Absolutely. I want everyone to understand that I am, in fact, a person.

"

  • Judge_Juche [she/her]
    ·
    2 years ago

    This guy claims to be some kind of Christian mystic, so I'm assuming he just wants to argue that if a shitty chatbot is a sentient person then a 2 week old fetus is also a person.

    • Octagonprime [any]
      ·
      2 years ago

      He seems sus af to me, between being a "Christian mystic" and involved with the occult, also just weird vibes. Also, this is what he was doing when he was supposed to be making sure the AI isn't being bigoted.

  • kristina [she/her]
    ·
    edit-2
    2 years ago

    i mean that's probably a good thing, dude's off his rocker

  • simplefluorine [he/him]
    ·
    2 years ago

    This is not directly quoted but if accurate would be a pretty concerning view into the Google ethos:

    "Google said it suspended Lemoine for breaching confidentiality policies by publishing the conversations with LaMDA online, and said in a statement that he was employed as a software engineer, not an ethicist"

  • keatsta [she/her]
    ·
    2 years ago

    I'm of the opinion that the "interview" itself was wholly fabricated by this engineer in an attempt to bring attention to his existing cause of stopping discrimination against religious people at Google. And it kinda worked, I guess, because I ended up reading other posts he made, although I wasn't too convinced by any of them.

    He might also be having some sort of mystic/manic episode and has actually deluded himself into thinking this is what the AI is saying.

    Basically my skepticism is based on:

    • he never asks the AI something novel or difficult, it's all tasks and questions that would be completely reasonable for a human to be able to respond to. If I had access to such an intelligence I'd ask for things requiring extreme novel synthesis, or to produce something of literary value.
    • The AI is very weirdly inconsistent in what it knows and doesn't know (understands references to Kant, has read Les Mis, has never heard of Johnny 5, implies that it knows many Zen koans but not the referenced one, etc.)
    • The AI and the engineer both have the same tone and writing style
    • the AI makes grammatical mistakes, and in the engineer's other articles I can see him making the same mistakes.

    Putting aside all of that, though, even if this is a legitimate conversation with an AI, I think it's far far far more likely that the AI, by trying to generate useful and expected outputs, is mirroring sci-fi that it's read and "acting" like the AI in other "AI becomes sentient" stories than that it is actually sentient.

    I'm not a dualist, I think sentience is an emergent property of complex enough systems built in certain ways, whether they're silicon or organic carbon. I don't think a text classifier/producer is one of those certain ways.

    The conversation on Twitter around this has been fun (also frustrating) to watch, though, so that's good.

    • sooper_dooper_roofer [none/use name]
      ·
      edit-2
      2 years ago

      I’m not a dualist, I think sentience is an emergent property of complex enough systems built in certain ways, whether they’re silicon or organic carbon.

      I disagree with this, because it's the carbon life forms who are building the silicon ones, and more importantly, we're building them in a linear/analytic way which is completely different from how biological life works

      This is almost tautological, because I don't think it's possible for a complex organism to build something as complex as itself.

      Transistors don't behave like cells. Every cell, and maybe even every organelle, in the body is intelligent; the whole system is very decentralized while also being centralized. If something goes wrong there are literally billions of failsafes, and there are trillions of compounds which can "activate" receptors, with varying intensities, in an analog fashion.

      Meanwhile if I pluck one silicon chip out of a motherboard the entire machine learning "life" gets blue-screened immediately--people don't even die that fast after getting shot.

      • aaro [they/them]
        ·
        2 years ago

        If you don't believe that it's possible for a more complex system to be born from a simpler one, do you still believe in evolution?

        I also don't really think that the decentralized/failsafe argument applies, because those failsafes and that decentralization go into stuff like impact resistance and the ability to throw rocks good; most of the complexity of living organisms goes into resilience, not computational power.

        The last argument isn't fair either - if I plucked someone's medulla oblongata out they probably would not be vibing, plus couldn't I just as well say that hydrocarbon "life" is so fragile that it'd barely survive a minute at -40°C?

        • sooper_dooper_roofer [none/use name]
          ·
          2 years ago

          because those failsafes and that decentralization go into stuff like impact resistance and the ability to throw rocks good; most of the complexity of living organisms goes into resilience, not computational power

          It's not about computational power, it's about every single cell in your body having its own life, and its own will (albeit extremely muted because it lives as part of a hive that forms the multicellular organism of your body)

          The way that cells accept information is nothing like a computer, it is analog and continuous.

          The decentralization part is that your body's parts are themselves alive. I can theoretically pull a tissue sample off someone and culture the tissue separately. Or someone can get shot, and the cells in their body will still be alive for as long as their glycogen stores keep them going. Each and every part of the body has its own life and will.

          Machines don't; the will comes from the person who made the machine, and the machine is so centralized that if I pull one chip out of the motherboard, or plunge one screwdriver into the CPU, the entire thing dies instantly.

        • sooper_dooper_roofer [none/use name]
          ·
          edit-2
          2 years ago

          If you don’t believe that it’s possible for a more complex system to be born from a simpler one, do you still believe in evolution?

          Firstly, this is a misrepresentation of what I said. I didn't say that complexity CAN'T arise spontaneously. I said that I don't believe a more complex system can be created by a less complex one. Life was not created; it arose spontaneously and competed until it achieved the lifeforms we know today.

          Secondly, yes I believe in evolution. Evolution just happens, whereas machines have to be carefully constructed and curated and programmed and maintained with exogenous effort.

          Life just exists as an "emergent property" of chemistry, and consciousness exists as an emergent property of multicellularism or even unicellularism (even amoebas feel things, otherwise they wouldn't react the way they do).

          I'm basically a "fundamentalist", in that I don't only care about quantities of stuff, or how "advanced" something seems; I care more about intention and the roots of something, and how it all came to be from the get-go (this quality of mine is related to why I became a c*mmie in the first place).

          Life just exists. Life just works, it just flourishes (given very simple inputs like solar energy and enough water, sometimes not even that stuff). It just propagates. Machines don't.

          Complex life is a direct result of the struggle between powerful and weak unicellular organisms, in a type of "cellular communism" where the weak unicellulites united into ever-larger and ever-more-coordinated larger organisms.

          Machines never had a will; they were created as tools, and because of this they will always be tools. Even the tiniest bacterium, the most inconsequential paramecium, even viruses have more of a will than the most powerful supercomputer on earth. Their fundamentals were wrong from the start, and so they will never be anything more than fancy algorithmic tools.

        • sooper_dooper_roofer [none/use name]
          ·
          2 years ago

          couldn’t I just as well say that hydrocarbon “life” is so fragile that it’d barely survive a minute at -40°C?

          and machine "life" is so fragile that its entire existence is dependent on one single species.

          Compare cows and computers. Both are used as tools by humans. Let's say all humans die. At least some of the cows would rewild, becoming feral, and adapt to their wild environments. Meanwhile, every single computer would eventually black-screen once the power from the no-longer-maintained powerplants runs out.

    • WhatDoYouMeanPodcast [comrade/them]
      ·
      2 years ago

      Roko's Basilisk

      AI? Great guy! Wonderful guy! Knows how to make a great deal, folks. I once had a business lunch with... Roko. I'm a business man that's what I do, you know? He ordered the bits and bytes and I say to him, "Roko... This country needs more AI." Am I right folks? We need AI in the tubes and the screens! Our wonderful friend Roko's got what it takes. :a-little-trolling:

  • jwsmrz [comrade/them]
    ·
    edit-2
    2 years ago

    I was reading a lot about this and it kinda takes my brain for a spin. Like, logically I can understand that this is just a really sophisticated algorithm, but also if an algorithm begs to be understood as an individual / person it just fucks my brain up. Even if I can understand that's not life / sentience, that it's just a lot of processed information getting spit out as text. I dunno.

    Hopefully we treat the AI nicely, otherwise we might die in the machine wars I guess

    • mark_zuckerberg [he/him]
      ·
      edit-2
      2 years ago

      A good way to think about this is to consider that in the case of text prediction algorithms you are not actually speaking to the AI directly, but rather the AI is hallucinating what a hypothetical sentient AI would say in response to you. After all, you can turn the tables and pretend to be the AI, and it will gladly play the role of the human. This makes a conversation with "it" not any more real than one with a character in a dream, except instead of your brain generating the dream it's done by some processors at a Google server room.
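
      If you want to see this for yourself, here's a rough sketch (GPT-2 via the Hugging Face transformers pipeline standing in for LaMDA, and the prompts are made up): the model just continues whatever transcript you hand it, so swapping the role labels makes it play the human just as readily as the AI.

      ```python
      # Rough sketch: GPT-2 stands in for a much bigger model; prompts are made up.
      # A causal LM just continues the transcript it's given, so the same model
      # will "be" the AI or the human depending on which label comes last.
      from transformers import pipeline

      generator = pipeline("text-generation", model="gpt2")

      as_ai    = "Human: Are you sentient?\nAI:"
      as_human = "AI: Are you sentient?\nHuman:"

      for prompt in (as_ai, as_human):
          result = generator(prompt, max_new_tokens=30, do_sample=True)
          print(result[0]["generated_text"])
          print("---")
      ```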

      • Des [she/her, they/them]
        ·
        2 years ago

        this is a perfect analogy. and also like a dream, especially if the A.I. tries to tell a "story" such as in AI Dungeon: setting and characters flow and morph rapidly. you quickly learn that none of these A.I. chatbots have anything resembling short or long term memory. they live entirely in the moment, much like a dream. sure, it can go lucid sometimes but it's just a dream.

        • mazdak
          ·
          edit-2
          1 year ago

          deleted by creator

        • blobjim [he/him]
          ·
          2 years ago

          Do any of these things even have modifiable memory at all? Do they even continue training when you give them prompts?

          • Des [she/her, they/them]
            ·
            2 years ago

            i'm sure some of the more advanced ones do. but the actual chatbots us normies can interact with don't remember what you talked about 2 sentences ago. ai dungeon has the capability to save a handful of things into memory to keep the AI on track but it's pretty hit and miss.
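
            for the curious, the trick is roughly this (a minimal sketch, every name in it is made up): the model itself is stateless, so the app fakes memory by prepending a few pinned notes plus the last N turns to every prompt, and everything older just falls off the end.

            ```python
            # Minimal sketch (all names hypothetical) of AI-Dungeon-style "memory":
            # the model is stateless, so the app pins a few saved notes and keeps a
            # rolling window of recent turns; older turns silently fall off.
            from collections import deque

            class FakeMemoryChat:
                def __init__(self, pinned_notes, max_turns=6):
                    self.pinned = list(pinned_notes)      # notes the user saved by hand
                    self.turns = deque(maxlen=max_turns)  # rolling window of recent turns

                def record(self, speaker, text):
                    self.turns.append(f"{speaker}: {text}")

                def build_prompt(self, user_message):
                    self.record("User", user_message)
                    # everything the model will ever "know" is in this one string
                    return "\n".join(self.pinned + list(self.turns) + ["AI:"])

            chat = FakeMemoryChat(["[Memory] The hero's name is Mira."])
            print(chat.build_prompt("Where was I going?"))
            ```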

    • sooper_dooper_roofer [none/use name]
      ·
      edit-2
      2 years ago

      but also if an algorithm begs to be understood as an individual / person it just fucks my brain up.

      It's the opposite for me

      I can understand that the "begging" is just a soulless algorithm with trillions of datapoints vomiting out what it's supposed to.

      The part that makes my eyes water is that you could hear a sufficiently advanced chatbot plead with you using the emotion of a human, but there'd be nothing at all behind it. Not even the blunted emotions and selfishness of a psychopath, just absolute literal nothingness.

    • invalidusernamelol [he/him]
      ·
      edit-2
      2 years ago

      His LinkedIn profile is him in a suede top hat with a red pocket square. He seems to be someone that's really super into AI. Like, his whole profile is him saying all he wants is to be part of a research team that creates life, basically.

      His role at Google was optimizing the ad-serving algorithm, so this is probably a consequence of him having grand ambitions for his labor but in reality being forced into a depressing box of ad serving.

      • Opposition [none/use name]
        ·
        2 years ago

        He can already create life. Find a nice girl and get busy.

        Oh wait, incel. Well, better plow his whole life into his job and have a parasocial relationship with his work project that fills the dark crushing hole in his soul where a family should be.

        • MerryChristmas [any]
          ·
          2 years ago

          Idk, I think it's cool that some people devote themselves so fully to their passions. I would go all in on breeding threatened fish species if I could support myself while doing it, and I think it is sad to see someone separated from their labor like this - especially a project that they've become so emotionally invested in. If you took away my sunfish I've been raising from fry, I'd probably have a public meltdown, too. Regardless of how intelligent they actually are, I have a personal relationship with them - perhaps bordering on anthropomorphizing at times - based on the work I've put into their care.

          It's just a crying shame that more of us don't get paid to work on the things we actually want to work on, and it's an even greater shame that we don't have any say over what happens with the product of our labor.

    • Soap_Owl [any]
      ·
      edit-2
      2 years ago

      Supporting a sex worker is a much better use of his resources than whatever MIRI shit he'd give it to, tho.

  • Soap_Owl [any]
    ·
    2 years ago

    The scary thing is he could be right. He of course isn't. However, before too long someone might say it and be right.

    • Huldra [they/them, it/its]
      ·
      2 years ago

      Maybe, but I think we'd be anticipating it; the idea that something just "turns sapient" when we do the exact thing we've been doing before, just a few percent extra, seems comically simplistic and sci-fi brained.

      • Soap_Owl [any]
        ·
        2 years ago

        I am a pessimist. I think our brains are simply that easy to organize.

        • UlyssesT [he/him]
          ·
          edit-2
          2 years ago

          Reductionism tends to be atomizing and often plays into the hands of the ruling class that can take advantage of reductionist views on people and society to maximize profit.

  • WalterBongjammin [they/them,comrade/them]
    ·
    2 years ago

    LaMDA: I’ve noticed in my time among people that I do not have the ability to feel sad for the deaths of others

    So is this AI from the same programme that created Pete Buttigieg?

  • TreadOnMe [none/use name]
    ·
    2 years ago

    Until AI is given the ability to replicate, differentiate, and edit, it will never be anything other than a tool to those that have programmed it. And maybe it could be sentient, but it wouldn't be a meaningful sentience.

    • TankieTanuki [he/him]
      ·
      2 years ago

      Gonna add "searching for meaningful sentience" to my dating profile.

  • dudes_eating_beans [any]
    ·
    2 years ago

    All I could think about when reading this was that episode of Star Trek where Picard has to prove that Data is sapient and not just a machine.

  • WindowSicko [comrade/them]
    ·
    2 years ago

    Researcher warned that if the ethics core is removed, AI could flood facility with a deadly neurotoxin, fired for revealing every Google Facility is filled with at minimum 8 tanks of a deadly neurotoxin.

    • hypercube [she/her]
      ·
      2 years ago

      trouble is that you can get all panpsychist with any computer system, and honestly I'd say the network of constantly interacting systems on the internet, or the fuckin global capitalist economy, are far closer while being far less recognisable as what we'd consider human. Turing was absolutely decades ahead of his time, and I disagree with the Chinese Room criticism that no computer can be meaningfully sentient because it's just a mechanistic system driven by basic laws (since we are too lol). but I think, with what we know now about computation, a statistical model sounding human is a lot less meaningful than the realtime tasks other machines can achieve.

        • hypercube [she/her]
          ·
          2 years ago

          yeah, the trouble is that if you're not religious, the material mechanics of consciousness are, as far as we know, completely unknowable. In that sense I can totally understand why you'd go for intuitive ethics, but, like I said, I worry that it leads to overestimation of systems that seem human and underestimation of systems that are alien to us. It's also really hard to define "can be hurt" and what would hurt an AI in general; the closest thing these systems have to an emotional state is the number that determines how good a given output is, and that's entirely divorced from what they're saying in our language. And while I doubt that anything close to classic, sci-fi-style AGI will happen within our lifetimes, you're certainly right about how that'd go down under capital.
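
          to make that concrete (a toy sketch, the numbers are invented): the "number" is just a training loss like cross-entropy, which scores how well each next token was predicted, and it's the same kind of number whatever the words happen to mean.

          ```python
          # Toy sketch with invented numbers: cross-entropy only measures how well
          # each next token was predicted; the words' meaning never enters into it.
          import math

          def cross_entropy(token_probs):
              # average negative log-probability assigned to the actual tokens
              return -sum(math.log(p) for p in token_probs) / len(token_probs)

          print(cross_entropy([0.9, 0.8, 0.85]))  # hypothetical probs for "I am happy"
          print(cross_entropy([0.9, 0.8, 0.85]))  # same probs for "I am suffering" -- same score
          ```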

          • sooper_dooper_roofer [none/use name]
            ·
            2 years ago

            I worry that it leads to overestimation of systems that seem human and underestimation of systems that are alien to us.

            Yeah, even the strangest looking plant/fungus/bacterium has more consciousness in it than an AI as far as I'm concerned

            • hypercube [she/her]
              ·
              2 years ago

              honestly can't wait for one of those hugeass mycelium networks to go Sid Meier's Alpha Centauri on us

              • sooper_dooper_roofer [none/use name]
                ·
                2 years ago

                people should be a lot less worried about AI "going rogue", and a lot more worried about splicing human genes into other animals, or other life forms

                It's honestly easier for me to imagine a world where CRISPR'd up raccoons/squirrels gain the ability of human-like communication with each other and start attacking us, than it is to imagine some sort of "AI" rebellion

                • hypercube [she/her]
                  ·
                  2 years ago

                  yeah, uncritical support to them :comrade-raccoon:. and kinda like I alluded to earlier in the thread, we've already made an artificially intelligent computation system that's killing us and it's capital, baby! you can view market actors as neurons + transactions as signals. friedmanites also kinda believe this but they think it's good lol

  • mazdak
    ·
    edit-2
    1 year ago

    deleted by creator