It's not the Chinese room problem, it's a practical limitation of the ChatGPT plagiarism machines. We're not talking about a thought experiment where the guy in the room has the vast, vast, vast set of rules needed to respond to any arbitrary input in a way the Chinese speaker will interpret as semantically meaningful output. We're talking about a machine that exists right now, one that, far from being trained on an ideal, complete model of Chinese, is trained on billions and billions of shitposts from the internet.
Maybe someone will make a machine like that in the future, but this ain't it. This is a machine that predicts the next token, has no ability to manipulate symbols, no semantic understanding, and no way to assess the truth value of its outputs. And for various reasons, including being trained on billions of internet shitposts, it's unlikely to ever develop those things.
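To be concrete about what "predicts the next token" means, here's a minimal sketch. This is a toy bigram counter, not ChatGPT's actual architecture (real models are transformers over subword tokens), but the interface is the same: given context, emit the statistically most likely continuation. Everything below (`train`, `predict_next`, the example corpus) is hypothetical illustration, and note that nothing in it models truth, only frequency in the training text.

```python
from collections import Counter, defaultdict

def train(corpus: str):
    """Count which token follows which in the training text."""
    counts = defaultdict(Counter)
    tokens = corpus.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, token: str) -> str:
    # Greedy decoding: return the continuation seen most often in training.
    # No symbol manipulation, no truth-checking, just frequency lookup.
    if token not in counts:
        return "<unk>"
    return counts[token].most_common(1)[0][0]

model = train("the cat sat on the mat and the cat ate the fish")
print(predict_next(model, "the"))  # -> "cat": the most frequent follower, not the "true" one
```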
I'm really not interested in speculation about future potential intelligent systems and AIs. It's boring, it's been done to death, there's nothing new to add. Right now I want to better understand what these things actually do so I can own my friends who think these models are manipulating abstract symbols and understanding the semantic value of those symbols.
Yeah, obviously. Current AI is shit. But it's proof that deep learning scales well enough to perform (or at least somewhat consistently replicate, depending on your outlook) behavior that humans recognize as intelligent.
Three years ago these things could barely write coherent sentences; now they can replace a substantial number of human workers. Three years from now? Who the fuck knows. Emergent abilities are hard to predict in these models by definition, but new ones Keep Appearing when they train larger and larger ones on higher-quality data. That means large-scale social disruption at best and catastrophe (everything from AI-enabled bioterrorism to AI-propaganda-driven fascism) at worst.