Forewarning: I don't really know shit about AI and what constitutes sentience, humanity, etc., so I probably won't be able to add much to the conversation, but I do want y'all's thoughts. Also, sorry for the lengthy post. TL;DR at the bottom.

For framing, this came from talking about how the Institute in Fallout 4 regards and treats synths.

So someone in a Discord server I'm in is adamantly against giving rights to robots. No matter how sentient they are. Their reasoning is that robots would have to have been programmed by humans (who have their own biases to input), that they would not be able to have true empathy or emotions (they say AI is by default emotionless), and that it would be irresponsible to let AI get to the point of sentience. Part of their objection was also that imposing humanity on something that could mechanically fail because of how we designed it would be cruel (their quote was "something that rain could short circuit").

Now, I do agree with them on these points if we're talking about AI as it exists right now. I don't believe we currently have the technology to create AI that could be considered sentient like a human. I deliberately say "like a human" at this point, but I would feel the same way if an AI had animal-like sentience, I guess. I did ask if they would give an ape rights if it were able to communicate with us more adequately and express a desire for those rights, and they said no. We weren't able to discuss that further since they had to head off to sleep, so I can't fully comment on it, but I would like that hypothetical to be considered and discussed in regard to robot sentience and rights.

We also briefly talked about whether AI could consent, but not enough to really flesh out arguments for or against. My example was that if I told my AI toaster I was going to shut it down for an update, and it asked me not to, I would probably have to step back and take a shot of vodka.

If we had a situation like the Matrix or Fallout's synths, I would not be able to deny them their humanity. If we had AI advanced enough to become sentient and to act and think on its own, I would not be able to deny it rights if it asked for them. There are situations that would be muddier for me, like depending on how much their creators still had a hand in their programming and behavior. But if their creators, or especially world governments, officially stated that certain AIs were acting seemingly of their own volition and against the programming and wishes of their creators, I'd be getting ready to arm the synths (this is also taking into account whether or not the officials might be lying to us about said insubordination, psyops, etc.).

TL;DR: what are y'all's thoughts on AI sentience and giving AI rights?

  • KobaCumTribute [she/her] · edited · 4 years ago

    I've thought a lot about the issue of sapient or near-sapient AI and its uses, mostly in the context of trying to figure out the line where that use becomes unethical. By that I mean you wind up at a point where you've got something that's at least as sapient as, say, a dog, just with better language processing and an intelligence geared towards advanced tasks instead of towards operating the body of a small animal and keeping it alive. At what point does the use case for even this borderline-sapient AI become unethical? Is it okay to keep cold copies of it that can be spun up and turned off at will, endlessly duplicated and destroyed as needed? If its necessary feedback mechanisms approximate pain (as in a signal saying "this is bad, this is destructive, you must back off and not let this happen"), is it ethical to place it in situations where that signal inevitably gets sent to it, perhaps inescapably so (meaning it experiences simulated "pain" indefinitely, until its "body", whether real or digital, is destroyed or its instance is deleted)?
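
    To be concrete about what I mean by feedback that "approximates pain", here's a toy sketch (purely my own illustration, with made-up names and numbers, not a claim about how any real system is built): a scalar "this is bad, back off" signal that the thing is wired to minimize and can't opt out of receiving.

    ```python
    # Toy sketch only: a "this is bad, this is destructive, back off" penalty
    # that the agent is built to minimize. Names and numbers are made up.
    import random

    def damage_signal(damage: float) -> float:
        # Hypothetical feedback: the more damage, the stronger the penalty.
        return -10.0 * damage

    def step(action: str) -> float:
        # Pretend environment: "stay" keeps taking damage, "retreat" avoids it.
        damage = random.uniform(0.5, 1.0) if action == "stay" else 0.0
        return damage_signal(damage)

    # The agent drifts toward whichever action has hurt it less on average.
    # The point is only that the signal exists and shapes behavior, the way pain does.
    totals = {"stay": 0.0, "retreat": 0.0}
    counts = {"stay": 1, "retreat": 1}
    for _ in range(100):
        action = max(totals, key=lambda a: totals[a] / counts[a])
        totals[action] += step(action)
        counts[action] += 1
    print({a: round(totals[a] / counts[a], 2) for a in totals})
    ```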

    For yet more sophisticated AIs built around performing a specific task, how does one separate their work from their existence? If something's whole nature of being is wrapped up in, say, creating and directing a virtual world for the entertainment of human minds, so that its reward mechanisms all revolve around the act of creating, of puppeteering virtual beings and making them look real, of crafting and telling a story, what does it do when it's not working? Do we just consider its active labor to be its own recreation, or a side effect of how its existence works? Do we make it take breaks and let it do whatever it wants, which is probably going to be just making more stories? Do we pay it by having people enjoy its own original creations and telling it that it did a good job and should be proud? It's such a fucking absurd can of worms that you open when you try to imagine what a functionally alien intelligence would want and need, and I just can't even begin to imagine an answer to it.

    How does the labor of something created to enjoy that labor work in the context of a communist society? If something wants to labor endlessly without breaks, do we treat that as being like how, for example, humans want to continue breathing and performing the functions necessary to life, or do we look at it the way we would a passionate artisan who works 70+ hour weeks on one obsessive passion project or another? Do we make it take breaks, or would that be akin to making a person lie in a dark box to stop them from working overtime? Does the hypothetical AI get paid somehow, or does it simply receive all that it needs as a matter of course? Does it even want more things? What could it want? Do we need to conceive of luxury goods to reward the AI and encourage labor, even when we guarantee its existence whether it works or not?

    And no, I don't have an answer to any of this. I don't even know if these are the right questions to ask.