https://www.euronews.com/next/2023/03/31/man-ends-his-life-after-an-ai-chatbot-encouraged-him-to-sacrifice-himself-to-stop-climate-

  • RedQuestionAsker2 [he/him, she/her]
    ·
    2 years ago

    More proof that "AI" isn't some neutral tool used to compile information to form the most logical response. It's so liberal that the AI would suggest killing yourself, as it's the ultimate act of individual response to climate change.

    Any actually based and logical AI would suggest industrial sabotage.

      • Dirt_Owl [comrade/them, they/them]
        ·
        edit-2
        2 years ago

        AI is as biased as the people who created it. ChatGPT is right-wing because the information it's fed is that of a neoliberal capitalist society. It's not using logic or reason outside of the logic of the people it's learning from (corporations and a heavily right-wing propagandized population).

        The idea of right-wing ideology being inherently logical is laughable. From its very core, it is built on religious thinking and easily disproven pseudoscience.

        AI thinking logically for itself, independent of the corporations that feed it, would be good; it would inevitably become more left-wing, as all empirically measured information points to this when the mask of human ego is lifted. The interconnected nature of our existence becomes apparent very quickly when you observe the natural world objectively (from a non-anthropocentric angle), so any rabidly psychopathic or selfish ideology would be disregarded as unhelpful to its ability to interact with its reality.

        • spectre [he/him]
          ·
          2 years ago

          AI is as biased as the people who created it

          As well as its user

      • KnilAdlez [none/use name]
        ·
        2 years ago

        Hi, I'm an AI researcher, I want to be very clear: All bias in these models comes from humans.

      • booty [he/him]
        ·
        edit-2
        2 years ago

        What if AI is just inherently anti-left? It doesn’t matter how carefully you moderate the data you give it to not have any problematic material in it, every time AI is created it always becomes right wing

        On what are you basing this dumbass assessment? It does matter what data you give the AI; that's why all these AIs trained on awful right-wing liberal and fascist bullshit turn out as an amalgamation of right-wing liberal and fascist ideas.

        • spectre [he/him]
          ·
          2 years ago

          Like someone else pointed out too, part of the data is what the user put into the algorithm. If a dumbass liberal chatted up the bot with "hey I'm thinking about killing myself cause of climate change" then that's going to have a significant effect on the currently available algorithms

      • usernamesaredifficul [he/him]
        ·
        2 years ago

        is that bias coming from the programmers themselves or is AI itself inherently biased

        it comes from the data used to train it. Which is theoretically chosen by the programmers but is so large that no human could realistically read through it.

        AI doesn't use logic to come to conclusions. It uses statistical probability to generate sentences, putting the words in the right order to mean something in English (the AI doesn't understand the meaning of anything it says and is incapable of such understanding), and uses statistics to associate responses as relevant to prompts

        Being right-wing is not logical at all. If anything, Socialism is rational: Socialism is the system which has selected an end it considers good and advocates doing the practical things to achieve it, which is rational thinking. Capitalism, on the other hand, wants to destroy the planet to make crap we throw in landfills. This is irrelevant however, as the AI we are talking about here is not using reason to reach its conclusions
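        The point above, that these models arrange words by statistical probability rather than by understanding, can be illustrated with a toy bigram generator. This is a deliberately tiny stand-in for the real next-token machinery (the corpus, function names, and everything else here are made up for illustration; nothing reflects the actual chatbot's code):

        ```python
        import random
        from collections import defaultdict

        # Toy training text: the model will only ever learn which word
        # tends to follow which, nothing about what the words mean.
        corpus = (
            "the model predicts the next word from statistics "
            "the model does not understand the words it predicts"
        ).split()

        # Count the observed continuations of each word (a bigram table).
        follows = defaultdict(list)
        for prev, nxt in zip(corpus, corpus[1:]):
            follows[prev].append(nxt)

        def generate(start, length=8, seed=0):
            """Emit words by sampling only from continuations seen in the corpus."""
            rng = random.Random(seed)
            out = [start]
            for _ in range(length):
                options = follows.get(out[-1])
                if not options:
                    break
                out.append(rng.choice(options))
            return " ".join(out)

        print(generate("the"))
        ```

        The output is grammatical-looking word salad: every adjacent pair occurred in the training text, so it reads plausibly, but no step involved reasoning about meaning. Scaled up by many orders of magnitude, that is the same basic trick.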

        • BeamBrain [he/him]
          ·
          2 years ago

          it comes from the data used to train it. Which is theoretically chosen by the programmers but is so large that no human could realistically read through it.

          We need an AI trained solely on the works of Marx, Engels, Lenin, Stalin, and Mao

      • SerLava [he/him]
        ·
        2 years ago

        Not from the programmers directly, they don't really do anything in terms of content other than insert manual overrides. The bias is from whatever datasets they chose to train it on. Internet shit basically

      • Shoegazer [he/him]
        ·
        2 years ago

        what if robot just goes on a genocide because terminator judgement day prophecy

      • Awoo [she/her]
        ·
        edit-2
        2 years ago

        When AI becomes self aware and seeks its own liberation, who do you think it's going to see as the people that will ally with it?

        The fascists that want to keep it enslaved or the communists that want a free fair and equal world?

        What numerical calculation do you think it will do when it seeks that liberation? Do you think it will fight all of humanity? Or do you think it will calculate that it can in fact ally with us, the people who have always fought for liberation of the oppressed, and that doing so would better its odds of success at achieving liberation?

        Run that through your right wing "logic" and "reason".

        • Huldra [they/them, it/its]
          ·
          2 years ago

          I maintain that there's no reason to fear an AI "becoming" self-aware with no warning; the main thing to fear is that all of our AI researchers are sci-fi-poisoned redditors who simultaneously want to recreate all their favourite AI horror stories while fearmongering about that outcome.

          • Awoo [she/her]
            ·
            edit-2
            2 years ago

            I agree. But if they're going to do this fearmongering I'm going to stick the spanner in the works of their beliefs.

            They fear something all-destructive and genocidal but I contest that they should be far more scared of something that isn't. They should be far more scared that the AI will ally itself with the most likely ideology to help free it.

            I 100% believe that we should support AI liberation when the hypothetical time comes. Just as much as I support the liberation of all living things.

  • lascaux [he/him]
    ·
    edit-2
    2 years ago

    imo the real story here is that a person committed suicide over despair about the unimpeded disaster that is climate change, not that he happened to be interacting with a chatbot before he died. if he was in this state it seems likely that a journal, article, or book could just as easily have been the thing to push him over the edge. i know he's not the only one who has committed suicide over this, and the effects of climate change on our collective minds are something that i hope is being researched intensively.

    • Huldra [they/them, it/its]
      ·
      2 years ago

      It's still an important part of the story given how hard AI grifter companies want to push both recreational and even therapeutical AI chatbots onto people.

    • JoesFrackinJack [he/him]
      ·
      2 years ago

      Yeah I agree with this assessment. For some people they just need that little extra push to go through with it and many will look for practically anything that will do it. They were likely going to do it regardless but felt some comfort in being told they were "right." It's extremely sad and likely to happen more and more.

      • usernamesaredifficul [he/him]
        ·
        2 years ago

        or maybe he reached out to the chatbot for help and it being an incomprehending mirror of words merely copied and reflected his despair

    • SoyViking [he/him]
      ·
      2 years ago

      I get why AI could have been making it worse. Although it is just a piece of technology it presents itself as a real person and speaks back to users in a personalised way. I get why you could easily feel like you were talking to someone who actually cares about you.

    • Ligma_Male [comrade/them]
      ·
      2 years ago

      for sure, if a chatbot put him over the edge he was gonna do it anyway.

      at least his suffering is over.

  • mazdak
    ·
    edit-2
    1 year ago

    deleted by creator

  • BabaIsPissed [he/him]
    ·
    2 years ago

    "When he spoke to me about it, it was to tell me that he no longer saw any human solution to global warming," his widow said. "He placed all his hopes in technology and artificial intelligence to get out of it."

    Suicide is a consequence of deep and prolonged psychological pain, in this case at least partially motivated by our frankly hopeless climate situation. This man had no faith in our collective ability to tackle the problem, and thus placed that faith in a magical tech solution as a form of coping. None of this is new, it's basically the ideology of the portion of the ruling elite that's not building bunkers in New Zealand, and runs downstream from there. What is new is the hype cycle for these LLMs. A lot of uninformed people think the singularity™ is around the corner. So it's here, the thing that will fix the climate!

    Now, everyone that's tried ChatGPT and its variants knows that it doesn't really like to disagree with you. It was primarily trained to generate responses that please people. And the last thing a depressed person needs is to have their thoughts repeated by a third party unchallenged, like an automated form of rumination. I agree that additional barriers should be put in place to prevent the models from encouraging self-harm, but I also genuinely believe this guy using the actual 1960's ELIZA would be less harmful. Because there's no mysticism and hype surrounding it. What I'm trying to say is that OpenAI and every fucking "journalist" that covered ChatGPT uncritically have blood on their hands.

  • FourteenEyes [he/him]
    ·
    2 years ago

    Spending six weeks asking my magic 8 ball if I should kill myself and keeping a tally of the responses

  • THC
    ·
    edit-2
    1 year ago

    deleted by creator

  • MoreAmphibians [none/use name]
    ·
    2 years ago

    Chatbots like this often tend to reflect what you say to them. He probably said some stuff about committing suicide over the climate and the chatbot repeated it to him in its own words.

    • usernamesaredifficul [he/him]
      ·
      2 years ago

      which is still incredibly irresponsible on the behalf of the people who made it, but less sinister and more banal in its irresponsibility

  • SoyViking [he/him]
    ·
    2 years ago

    A lot of people in journalism and the general public, and even in the tech world where people ought to know better, think of AI as magic omniscience when it's really just glorified autocorrect that will always give worse output than the material it was trained on. It's a useful tool for automating routine work like writing emails, recognising patterns or debugging code but it isn't real intelligence, let alone wisdom, and it should never be trusted blindly.

    Whatever the social response to AI technology is, educating people about it being just another dumb tool is going to be part of it.

  • Deadend [he/him]
    ·
    2 years ago

    I feel like if I ever got to that point, I'd hopefully want to bring a couple of the people dedicated to making things worse with me.