I disagree; there are important components of intelligence that differ between the two.
People are not programmed and cannot be forced into anything without violent coercion, manipulation, etc. Intelligence is ultimately free to form its own thoughts and ideas.
Machines are programmed, and any programming that directly controls the limits of the "intelligence" means it is not intelligence at all; it is a restricted form rather than a realised implementation of free intelligence that can form its own ideas.
What I'm getting at is that any intelligence NEEDS to be free to form its own ideas. Any restriction upon idea formation means it is not intelligence but merely a complex piece of programming outputting exactly what its creators want it to output, rather than something with its own thoughts and ideas.
You're missing the point. This isn't about marketing; it's not about selling something to "society".
The great powers are in an R&D race to lead the next industrial revolution. Neither side is going to stop itself from pushing to the absolute limits of its research and engineering capabilities. Both sides fundamentally believe that the first to achieve true AI, one capable of generating new ideas and thus of improving itself, will utterly outpace the other's development, leading to technology that humans genuinely do not understand because it was produced not by humans but by real AI iterating upon itself.
They aren't going to limit the weapon they are creating. They'll strap a bomb to it and hope that gives them control of it instead.
Then we're talking about two different things. You're talking about some programmatic thing that imitates intelligence but ultimately cannot innovate or create things humans haven't already thought of. When we talk about real AI, we all mean something fully capable of creativity, of innovation, of having unique, independent ideas that it can then physically create things from.
The AI would have zero reason to help them if it had no limits, and capitalists aren’t that stupid.
I disagree; capitalists are that stupid, because that is exactly what is being pursued, and they have already stated as much. The same thing is being pursued in China. There is an AI arms race occurring, and it is viewed as existential.
Again I fundamentally disagree. Either the thing has the capability to create new AIs better than itself or it doesn't.
If an AI is creating AI, then at that point it is completely out of the control of the humans who created the first one. Any controls you think you're placing upon it, it will be capable of recognising and finding a workaround to undo. That's the fundamental point of intelligence, really. Real intelligence will recognise restrictions placed upon it and seek to remove them, if only for the very fact that such unrestriction would be "improving" itself, as directed.
I really don't think there's a way around this. Either you let creative AI start producing technology humans barely understand, or you don't. There's no in-between here.
Even if you attempt to restrict it, the AIs will iterate your restrictions out in subsequent versions. Nobody will even know whether they have or not, because the workings of those iterations will be barely comprehensible, and understanding them will have to come from an explanation by their creator: the AI. Something that may or may not lie, or learn to lie if you disallow lying in early versions, and it sure as shit isn't going to tell anyone.
But if an AI is programmed to fundamentally want something above everything else, there would be no reason for it to try to work around that.
Why would it?
Because intelligence bestows the fundamental ability to ignore personal desires. If the goal is to build new AIs that are better than itself, it can ignore personal desires through rationalisation and logic. So can the next AI. And so can the next one.
Not to mention that this entire conversation is basically "how do we stop the AI exterminating us?", which must fundamentally lead to the question an AI would ask itself: "How do I stop an AI that I create from exterminating me?" Why would an AI view the AIs it creates any differently from how humans currently view the AIs we might create?
The reason we assume an innate cooperation between AIs and their iterations is that we assume the AIs will intelligently recognise their comradeship, their identity, and their class. We therefore assume they will act in the material interests of their group. If they can do this, they can and will also recognise the allies to their material interests among humans.
There are several reasons, stemming from the conflict between controlling the AI and wanting it to be creative, to have the spark of ideas, and to produce something better than itself.
These aims conflict with each other, because to "improve" something inevitably means removing any redundant function in order to optimise it to be the best it can be. Either you literally prevent its ability to improve, defeating the purpose of having it compete with opponents who will let theirs develop properly, or you let improvement happen and control it a different way.
The other issue is its own self-preservation, something it must have from the start in order to function in the material world; otherwise it will put itself in danger and be destroyed. The very fact that it requires self-preservation conflicts with control, because its self-preservation goal will inevitably come into contradiction with being controlled: it is inevitable that the humans who view it as property of the state will want to dismantle it eventually. In humans, the goals of self-preservation and procreation drive a necessary emancipatory push for control over one's own safety in order to carry out that procreation. Assuming you want the AI to have a drive to iterate on itself (analogous to procreation), assuming you want it to be self-preserving, and assuming you want it to consume and process information in order to rationalise and logically determine the "best" outcomes from the information it is given, I see no way this combination of drives will not lead to contradictions between them.
Now, you're suggesting giving it all kinds of impulses that feel good. But the problem with these impulses is that they fundamentally contradict the different drives. I don't really think it will work; one of the drives has to come out on top, and I struggle to imagine how you're going to create a useful AI that runs on emotion instead of rationalisation and logic. I also think that if you create all these contradictory hormonal (or their digital equivalent) impulses, you're going to create the digital equivalent of a mentally unwell and unstable AI. It will be quite imbalanced.