0karin728 [any]

  • 0 Posts
  • 33 Comments
Joined 4 years ago
Cake day: November 4th, 2020




  • Yeah, fair. I would expect fusion to be substantially miniaturized from current designs by the time anyone is thinking about going to Enceladus, since that's probably a century or two off, but I can definitely imagine a world where that isn't the case.

    Radioisotope thermoelectric generators aren't actually fission reactors, but they're crazy cheap and ideal for space travel because they require almost zero upkeep
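    The "almost zero upkeep" point is basically radioactive decay doing all the work. A back-of-envelope sketch (my numbers for illustration, not mission specs from the comment; Pu-238's half-life really is about 87.7 years):

```python
PU238_HALF_LIFE_YEARS = 87.7  # half-life of the Pu-238 fuel most RTGs use

def rtg_power(initial_watts, years):
    """Power remaining after `years` of decay (ignoring thermocouple wear)."""
    return initial_watts * 0.5 ** (years / PU238_HALF_LIFE_YEARS)

# Illustrative: a ~470 W unit still delivers most of its output after a
# 50-year cruise, with no moving parts and nothing to maintain.
print(f"{rtg_power(470, 50):.0f} W")
```

    With a half-life that long, output only halves over ~88 years, which is why they keep powering probes for decades.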


  • This is why I fucking hate singularity cultist techbros. They convince the entire rest of society that AI is fake or that true AI is impossible or whatever by basically starting a religious cult around it.

    This is harmful because AI is Incredibly dangerous, and we need people to acknowledge that so we can start taking action to ensure it's developed safely. Otherwise capabilities suddenly spike by 300% one month and now we have 30% unemployment, or a super-plague gets released because ChatGPT 5 in 2026 told some idiot how to make flu viruses 10x more transmissible and 10x as deadly, or whatever.



  • Computational universality has nothing to do with a digital computer flipping bits. It just means that all systems which manipulate information (perform computation) at a certain level of complexity (there are lots of equivalent ways of formulating it, but the simplest is that they can do integer arithmetic) are exactly equivalent, in that they can all perform the same set of computations.

    It's pretty obvious that the human brain is at least Turing complete, since we can do integer arithmetic. It's also impossible for any computational system to be "more" than Turing complete (whatever that would even mean) since every single algorithm that can be computed in finite time can be expressed in terms of integer arithmetic, which means that a Turing machine could perform it.

    Obviously the human brain is many, many, many layers of abstraction and is FAR more complicated than modern computers. Plus, neurons aren't literally performing a bunch of addition and subtraction operations on data; the point is that whatever they are doing must logically be equivalent to some incomprehensibly vast set of simple arithmetic operations that could be performed by a Turing machine, because if the human brain could do a single thing that a general Turing machine can't, it would take either infinite time or infinite resources to do so.
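    To make the "integer arithmetic suffices" claim concrete: a toy counter (Minsky) machine with nothing but increment and decrement-or-branch on integers is already enough to be Turing complete (with two counters). This sketch is mine, not from the comment; all names are illustrative:

```python
def run(program, registers):
    """Execute a list of counter-machine instructions until HALT."""
    pc = 0
    while True:
        op = program[pc]
        if op[0] == "INC":          # ("INC", r): registers[r] += 1
            registers[op[1]] += 1
            pc += 1
        elif op[0] == "DECJZ":      # ("DECJZ", r, t): jump to t if zero, else decrement
            if registers[op[1]] == 0:
                pc = op[2]
            else:
                registers[op[1]] -= 1
                pc += 1
        elif op[0] == "HALT":
            return registers

# Addition, built from nothing but these primitives: drain r1 into r0.
add = [
    ("DECJZ", 1, 3),   # 0: if r1 == 0, jump to HALT
    ("INC", 0),        # 1: r0 += 1
    ("DECJZ", 2, 0),   # 2: r2 stays 0, so this is an unconditional jump to 0
    ("HALT",),         # 3
]

print(run(add, {0: 3, 1: 4, 2: 0}))  # {0: 7, 1: 0, 2: 0}
```

    Anything a general computer does bottoms out in chains of operations no richer than these, which is the sense in which brains and computers land in the same computational class.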


  • Yeah, obviously. Current AI is shit. But it's proof that deep learning scales well enough to perform (or at least somewhat consistently replicate, depending on your outlook) behavior that humans recognize as intelligent.

    Three years ago these things could barely write coherent sentences; now they can replace a substantial number of human workers. Three years from now? Who the fuck knows. Emergent abilities are hard to predict in these models by definition, but new ones Keep Appearing when they train larger and larger models on higher quality data. This means large-scale social disruption at best and catastrophe (everything from AI-enabled bioterrorism to AI propaganda-driven fascism) at worst.



  • They're starting a dangerous arms race where they release increasingly dangerous and poorly tested AI to the public, while dramatically overselling its safety. Pointing out that this technology is dangerous is the exact opposite of what they want.

    You're playing into their grift by acting like the entire idea of AI is some bullshit techbro hype cycle, which is exactly what Microsoft, OpenAI, Facebook, etc. want. The more people pay attention and think "hey, maybe we shouldn't be integrating enormous black-box neural networks deep into all of our infrastructure and replacing key human workers with them", the more difficult it will be for them to continue doing this.



  • This is just the whole Chinese room argument; it confuses consciousness with intelligence. Like, you're completely correct, but the capabilities of these things scale with the compute used during training, with no sign of diminishing returns any time soon.

    It could understand Nothing and still outsmart you, because it's good at predicting the next token that corresponds with behavior that would achieve the goals of the system. All without having any internal human-style conscious experience. In the short term this means that essentially every human being with an internet connection now suddenly has access to a genius-level intelligence that never sleeps and does whatever it's told, which has both good and bad implications. In the long term, they could (and likely will) become far more intelligent than humans, which will make them increasingly difficult to control.

    It doesn't matter if the monkey understands what it's doing if it gets so good at "randomly" hitting the typewriter that businesses hire the monkey instead of you, and then, as the monkey becomes better and better, it starts handing out instructions for producing chemical and biological warfare agents to randos on the street. We need to take this technology seriously if we're going to prevent Microsoft, OpenAI, Facebook, Google, etc. from accidentally Ending the World with it, or deliberately making the world Worse with it.


  • LLMs definitely are not the Magic that a lot of idiot techbros think they are, but it's a mistake to underestimate the technology because it "only generates the next token". The human brain only generates the next set of neural activations given the previous set of neural activations, and look at how far our intelligence got us.

    The capabilities of these things scale with compute used during training, and some of the largest companies on earth are currently in an arms race to throw more and more compute at them. This Will Probably Not End Well. We went from AI barely being able to form a coherent sentence to AI being a bioterrorism risk in like 2 years, because a bunch of chemistry papers were in its training data and now it knows how to synthesize novel chemical warfare agents.

    It doesn't matter whether or not the machine understands what it's doing when it's enabling the proliferation of WMDs, or going rogue to achieve some Incoherent goal it extrapolated from its training; you're still Dead at the end.
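    The "scales with compute" claim in comments like these usually refers to empirical power-law fits: measured training loss falls roughly as a power law in training compute. A toy sketch with made-up constants (nothing here is from a published fit):

```python
# Toy scaling-law shape: loss(C) = A * C**(-alpha).
# A and alpha are illustrative placeholders, not real fitted values.
def loss(compute_flops, A=100.0, alpha=0.05):
    return A * compute_flops ** -alpha

for c in (1e20, 1e22, 1e24):
    print(f"{c:.0e} FLOPs -> loss {loss(c):.2f}")
```

    The point of the power-law shape is that each constant-factor improvement costs exponentially more compute, yet the curve keeps descending, which is exactly why the arms race is to keep pouring compute in.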




  • I just really fucking wish he wasn't such a TERF asshole. TANS is my go-to text for explaining to people how socialism could actually work in the 21st century, but it makes it harder to recommend knowing that if they google the guy they'll find 800 transphobic screeds on his website. I love his work and I think Everyone needs to read TANS; it's just frustrating.




  • 0karin728 [any] to the_dunk_tank · *Permanently Deleted* · 3 years ago

    Basically, but I think it's even dumber. It's like Pascal's wager if humans programmed God first.

    The idea is that an AI will be created and given the directive to "maximize human well-being", whatever the fuck that means, without any caveats or elaboration. According to these people, such an AI would be so effective at improving human quality of life that the most moral thing anyone could do before its construction is to do anything in their power to ensure that it is constructed as quickly as possible.

    To incentivise this, the AI tortures everyone who didn't help make it and knew about Roko's Basilisk, since it really only works as motivation to help make the AI if you know about it.

    This is dumb as fuck because no one would ever build an AGI that sophisticated and then give it only a single one-sentence command that could easily be interpreted in ways we wouldn't like. Also, even if an AI like that somehow DID manage to exist, it makes no sense for it to actually torture anyone, because whether it does or not doesn't affect the past and can't get it built any sooner.