Idk if serious but one legitimate concern about AI is the paperclip maximizer. Build a really, really smart system and tell it to build as many paperclips as it can, as efficiently as it can. Leave it alone for a while. Return to find out to your horror that the AI has decided that the most efficient way to make the most paperclips is to destroy all humans, turn all the mass of the Earth into paperclip-building drones, then convert the mass of the entire solar system into paperclips.
Capitalism already is a paperclip maximizer: increase profit at all costs, without consideration for anything else.
And lots of machine learning systems that have baked-in racism or bias, because their programmers were twits, have shades of it.
The racism one is a serious issue with AI: racism in the data means racism in the learned model of the world. When an AI learns racism, it's usually the fault of the data or the people who collected it.
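To make that point concrete, here's a minimal toy sketch (all names and numbers are made up for illustration): a trivial "model" that just learns the majority outcome per group. If the historical labels are biased against one group, the learned rule reproduces that bias even when the candidates are identical on merit. Real models are far more complex, but the mechanism is the same: the model faithfully learns whatever pattern the data contains.

```python
from collections import Counter, defaultdict

# Hypothetical biased historical data: (group, qualified?, hired?).
# Both groups are equally qualified, but group B was labeled
# unfavorably more often -- a data-collection bias, not a merit gap.
training_data = (
    [("A", True, True)] * 80 + [("A", True, False)] * 20 +
    [("B", True, True)] * 40 + [("B", True, False)] * 60
)

def learn_majority_rule(data):
    """Learn the most common outcome for each group -- a stand-in for
    any model that picks up group membership as a predictive feature."""
    outcomes = defaultdict(Counter)
    for group, _qualified, hired in data:
        outcomes[group][hired] += 1
    return {group: counts.most_common(1)[0][0]
            for group, counts in outcomes.items()}

model = learn_majority_rule(training_data)

# Two equally qualified candidates get different predictions,
# purely because the training labels were biased:
print(model["A"])  # True
print(model["B"])  # False
```

The model never "decides" to be unfair; it just optimizes its metric (match the labels) on the data it was given, which is exactly why the fix has to happen at the data level.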
I would argue any system stupid enough to build an AI paperclip maximiser and let it run wild would be stupid enough to have a team of humans do the same job, and indeed that already happens with capitalism, as you pointed out.
An AI learns from data how to better perform a task according to set metrics. Simply don't train it on a metric that rewards wiping out humanity. Easy.