As a data science undergrad who knows generally how these things work: LLMs are fundamentally not built in a way that could achieve any measure of consciousness.
Large language models are probability-centric models. At every step they essentially ask, "given the quintillion sentences and quadrillion paragraphs I was trained on, which word is most probably next, given the input and the chain of output so far?" That makes them really good at producing something that sounds coherently voiced. But this is not reasoning, it's parroting: a chain of dice rolls, weighted against basically all writing ever, that produces something that reads like a good answer to the words of the input.
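To make the "weighted dice rolls" point concrete, here's a toy sketch: a bigram counter, nowhere near a real transformer, with a made-up corpus and made-up names. The generation loop is the same idea though: score the candidates for the next word, roll the weighted dice, append, repeat.

```python
import random
from collections import defaultdict

# Tiny made-up corpus; a real model trains on vastly more text.
corpus = "the model predicts the next word the model samples the next word".split()

# Count how often each word follows each other word (a bigram table).
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def sample_next(word):
    """Weighted dice roll over every word that has followed `word` in the corpus."""
    candidates = list(counts[word].keys())
    weights = list(counts[word].values())
    return random.choices(candidates, weights=weights, k=1)[0]

def generate(start, length=8):
    out = [start]
    for _ in range(length):
        if out[-1] not in counts:  # dead end: no observed continuation
            break
        out.append(sample_next(out[-1]))
    return " ".join(out)

print(generate("the"))
```

A real LLM swaps the bigram counts for a transformer scoring every token in its vocabulary, but the loop itself, sample the next token given everything so far, is the same.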
The entire idea behind prompt engineering is that these models cannot do internal reasoning on their own, so you have to coax them into talking their way through a problem, writing the chain of logic out in the open so the model can condition on its own intermediate steps (see the sketch below).
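A minimal sketch of what that trick looks like in practice. `call_model` is a hypothetical stand-in for whatever API or local model you'd actually use; the only thing that matters here is the difference between the two prompts.

```python
def call_model(prompt: str) -> str:
    # Hypothetical placeholder: in practice this would hit an LLM endpoint.
    raise NotImplementedError("swap in your own model call")

question = "A train leaves at 3:40 pm and the trip takes 95 minutes. When does it arrive?"

# Plain prompt: the model has to jump straight to an answer.
plain = f"{question}\nAnswer:"

# Prompt-engineered version: force the intermediate steps into the output so the
# model can condition on its own written-out reasoning before giving the answer.
step_by_step = (
    f"{question}\n"
    "Work through this step by step, showing each intermediate calculation, "
    "then state the final answer on its own line."
)

# answer = call_model(step_by_step)
```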
I do not think AGI, or whatever they're calling Star Trek-tier AI these days, will arise out of LLMs and transformer models. I think it is fundamentally folly. What I see as fundamental elements of consciousness are either not covered by the architecture at all (such as subjectivity) or still sorely lacking despite the pace of development (such as cognition). Call me a cynic, but I truly don't think it's going to come out of genAI as we've understood the technology for the past couple of years, or out of further research down that same path.
> told by multiple people that Abstract and Discrete is a notable step-up in difficulty from my previous math classes
> do none of the homework
> set the upper bound of the curve on both exams
am I being trolled? this isn't hard at all for me, it's just a bit tedious