• 6 Posts
  • 157 Comments
Joined 1 year ago
Cake day: July 4th, 2023

  • I can try to get into more detail on this another time (I need to wind down for sleep), but what I'm trying to get at here is that when you point at basically all states thought of as AES (maybe I missed one?) and call them revisionist in one form or another, it can end up sounding exactly like the "that wasn't real communism" trope, or alternatively like "that was real communism, and see how it sucks and fails in practice." I'm trying to word this carefully because it could go in the other direction too if presented thoughtlessly, where it sounds like I'm saying that criticisms of AES projects are bad (criticism is important).

    The point I hang on is making sure we're not de-legitimizing the theory and practice as a whole by being unfairly dismissive of how closely practice aligns with the goal, of where it is on the developing path. And also making sure we are clear on tactics vs. a corruption mindset. To use a rough war analogy: sometimes you have to retreat in order to regroup, but that doesn't mean your army has taken a step backward in its ideological goals. Retreating carries the risk of leading to giving up and compromising on what you intend, but the one doesn't automatically follow from the other. So, in the weeds of it, we should be clear on when something is a dangerous compromising of an ideological center and when something is a more complicated tactical development that undoubtedly carries some risk of losing that center, but still has it and has a specific plan in mind for how to develop past the "retreating" into an advance.

    Edit: wording




  • Why does this person think it's about "big name" worship? I don't care how Newton would feel about AI development. In the same way, I don't care whether Marx would agree with how things developed after him. His contribution was a scientific one, and others have built on what he observed and documented. If he were still here to observe and document, he could continue to contribute - great! But he's not, so it doesn't really matter. What matters is accurate and effective development of theory and practice. So, if there's an individual who is contributing a lot of sound work, it makes sense to give that work attention. If they are no longer around, you don't hang on what you imagine they might have thought about future developments.


  • "Excessive force" is such a strange euphemism. Like it implies that there's some amount of force that is inherently fine as force just so long as it isn't "excessive." But conveniently, who gets to define what is "excessive" is also the entity who has a monopoly on violence. So it's that sort of "we investigated ourselves and found nothing wrong" energy, ya know.

    Aware I'm preaching to the choir, but seems like as good a time as any to re-emphasize the point about how liberals tend to believe their thinking can exist outside any predominant model of society, as neutral, when their thinking comes in some significant part from it. And it can take years of active learning just to become aware of those biases, much less try to unlearn them and replace them with something else.


  • (by smart contract, if I’m not mistaken)

    No investigation, no right to speak and all that.

    You don't sound very confident on the details. I do appreciate the explanation, and I am not trying to be snarky or dismissive here. But if you are trying to hold people to a standard of "no investigation, no right to speak," I would expect a little more than this from the one who has done the investigation.

    Here is part of the quote:

    You can't solve a problem? Well, get down and investigate the present facts and its past history! When you have investigated the problem thoroughly, you will know how to solve it. Conclusions invariably come after investigation, and not before. Only a blockhead cudgels his brains on his own, or together with a group, to "find a solution" or "evolve an idea" without making any investigation. It must be stressed that this cannot possibly lead to any effective solution or any good idea. In other words, he is bound to arrive at a wrong solution and a wrong idea.

    There are not a few comrades doing inspection work, as well as guerrilla leaders and cadres newly in office, who like to make political pronouncements the moment they arrive at a place and who strut about, criticizing this and condemning that when they have only seen the surface of things or minor details. Such purely subjective nonsensical talk is indeed detestable. These people are bound to make a mess of things, lose the confidence of the masses and prove incapable of solving any problem at all.

    The full thing can be found here for discussion: https://www.marxists.org/reference/archive/mao/selected-works/volume-6/mswv6_11.htm

    My takeaway, as relevant here, is that it's more about people who hypothesize and invent wildly from nothing, resisting going among the masses to learn what they need and how it can be done, than about people who are skeptical in the year 2024 when encountering anonymous claims about technology on the internet.

    I've been in situations before of having investigated something quite a bit and facing stubbornness from people who haven't. I can empathize on that level. It's frustrating when you've done the work to learn and people act like their knowledge is equal to yours in spite of having spent little to no time on it at all. But I think there is a line we can cross where it's going to sound like we're saying "turn your brain off and take my word for it" instead of "let's educate the masses so they are better informed."

    In this context, for example, how are we defining what "within reason" is for skepticism? Skepticism is more or less a kind of wariness. I'm having trouble working out where you'd draw the line for reasonable or unreasonable skepticism if we're starting from the premise that the whole reason a person is being skeptical is because they lack the information to confidently draw a conclusion.

    I'm not asking for a detailed reply here; just consider it food for thought, and if you want to dig into it, you're welcome to, of course.





  • The political compass in general seems simultaneously reductionist and insufficiently nuanced. If we were going to simplify the current world, I think it'd be more accurate to simply say there is the western empire and its exploitation (and those allied with it), and then there is anybody who opposes that. Within that, you can start getting into nuance like "do they just preach vague stuff about anti-war but cling to the white supremacy dynamic developed over hundreds of years?" (e.g. US patsoc, if I'm not mistaken). But even then, if we're understanding imperialism by the right definition, the question becomes: is somebody opposing imperialism, or just opposing, vaguely speaking, some of the international decisions the US makes? Are they siding with decolonization processes and the mindset behind them, or do they just want to call it quits on the looting and pillaging for now and sit on their hoard?

    I have some concern that overthinking the distinctions just empowers a divide-and-conquer strategy. Drawing them has its place when we're talking about combined theory and practice in an organized manner, making sure your developing movement is not being derailed or taken over by opportunists. But if we're talking about personality-test-style aesthetics, it only waters things down and draws confusing lines.




  • Sounds similar to some stuff I've been trying to make more conscious and confront, which is to do with the expectations I have of myself and how realistic or healthy they are. A big one for me is social expectations I impose on myself. I tend to have this nebulous image in my head of a smooth, effective socializer that I sort of implicitly believe is what "most people" are and then I get upset with myself when I can't live up to that, or I avoid social situations so that I can't fail to live up to it since I'm not trying.

    But this image is unhealthy, it's unrealistic, and quite honestly, it's not even what most people are. If I actually look at my observations of how others socialize without the lens of assuming they have some special knowledge or skill that I don't, they're kinda all over the place and some of them even make me look more like the smooth image I have by pure contrast of how awkward they are. But ultimately, it's not healthy to view it as a ranking of skill anyway. Because, and this is important, socializing is not a competition.

    Whether most of your problems of comparison and expectations for yourself are socializing or something else, you can apply similar understanding. For example, capitalism tends to get us thinking our competency in the workplace is a ranked system of value. But in practice, it's not even truly a meritocracy. They just preach like it is to get people clawing over each other for personal gain. In practice, it's generally wealth and power passed on from rich families to rich families and anybody beyond that is like a lotto player trying to get ahead.

    You are not weak. You are struggling, as many struggle. Where communists, where the masses find the most strength is in each other, not from a special potential unlocked from within. You can find ways to try to maximize your potential in different contexts, but that's still relative to you and your limits and it's not gonna be a thing that's the same maximum every day, or even every hour. A person who is sick has a much lower maximum than the same person when they are healthy. Same with a person who is burned out vs. not. Having a disability like ADHD changes what your potential looks like vs. being neurotypical, as well as being medicated ADHD vs. not medicated.

    I will reiterate: it's not a competition, and it's important to unlearn the idea, shoved into our heads all our lives, that it is. They try to make it into a competition, but it's mostly only an actual competition in the sense of which of the lotto ticket buyers will be the winner. In other words, the forced competition of capitalism is more rigged and random than it is a real ladder that rewards you for being "better."

    You are not your contributions. You are valuable and important beyond that. We have to take that mentality seriously; otherwise, we'd be implying that the most disabled and dependent people aren't important, you know? You can take pride in what you do when you do it, but if you view your value as hinging on how competent of a revolutionary you are, you're still spinning on individualist, capitalist thinking. Don't let capitalism devalue human life. Sometimes it can help put it in perspective to look at how you view others vs. how you view yourself. For example, if you would oppose it devaluing the life of a Palestinian in Gaza, why would you be okay with it devaluing your own life?




  • That's an interesting take on it, and I think it sort of highlights part of where I take issue. Since it has no world model (at least, not one that researchers can yet discern substantively) and no adaptive capability (without purposeful fine-tuning of its output by machine learning engineers), it is sort of a closed system. Within that, it is locked into its limitations and biases, which are derived from the material it was trained on and the humans who consciously fine-tuned it toward one "factual" view of the world or another. Human beings work on probability in a way too, but we also learn continuously and are able to do an exchange between external and internal, us and environment, us and other human beings, and in doing so, adapt to our surroundings. Perhaps more importantly in some contexts, we're able to build on what came before (which is where science, in spite of its institutional flaws at times, draws such strength of knowledge).

    So far, LLMs operate sort of like a human whose short-term memory fails to integrate into long-term memory, except here it's by design. That presents a problem for making them useful beyond specific points in time of cultural or historical relevance. As an example to illustrate what I mean: suppose we're back in the time when it was commonly thought the earth was flat, and we construct an LLM with a world model based on that. Then the consensus changes. Now we have to either train a whole new LLM (and LLM training is expensive and slow, at least so far) or somehow go in and change its biases. Otherwise, the LLM just sits there with its static world model, continually reinforcing the status quo belief for people.

    OTOH, supposing we could get it to a point where an LLM can learn continuously, now it has all the stuff being thrown at it to contend with and the biases contained within. Then you can run into the Tay problem, where it may learn all kinds of stuff you didn't intend: https://en.wikipedia.org/wiki/Tay_(chatbot)

    So I think there are a couple of important angles to this. One is the purely technical endeavor of seeing how far we can push the capability of AI (which I am not opposed to inherently; I've been following and using generative AI for over a year now as it's become more of a big thing). And then there is the culture/power/utility angle, where we're talking about what kind of impact it has on society, what kind of impact we think it should have, and so on.

    The second one is where things get hairy for me fast, especially since I live in the US and can easily imagine such a powerful mode of influence being used to further manipulate people. Or, on the "incompetence" side of malice vs. incompetence, poorly regulated businesses simply being irresponsible with the technology. Like Google's recent AI search result summaries giving hallucinations. Or like what happened with the Replika chatbot service in early 2023, where they filtered it heavily out of nowhere, claiming it was for people's "safety," and in so doing caused mental health damage to people who were relying on it for emotional support. And mind you, in this case the service had actively designed and advertised it as being for that, so it wasn't like people were using it in an unexpected way. The company was just two-faced and thoughtless throughout the whole affair.


  • It never ceases to amaze me the amount of effort being put into shoehorning a probability machine into being a deterministic fact-lookup assistant. The word "reliable" seems like a bit of a misnomer here. I guess only in the sense of reliable meaning "yielding the same or compatible results in different clinical experiments or statistical trials." But certainly not reliable in the sense of "fit or worthy to be relied on; worthy of reliance; to be depended on; trustworthy."
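    To make the "probability machine" point concrete, here's a toy sketch of next-token sampling. The logit values and the prompt are entirely made up for illustration (not from any real model), but the mechanism is the same shape as what LLMs do: even when the "correct" token is the most probable one, sampling still surfaces a plausible-but-wrong answer a meaningful fraction of the time.

```python
import math
import random

# Hypothetical next-token scores a model might assign after the prompt
# "The capital of Australia is". These numbers are invented for illustration.
logits = {"Canberra": 2.0, "Sydney": 1.5, "Melbourne": 0.5}

def softmax(scores, temperature=1.0):
    """Convert raw scores into a probability distribution over tokens."""
    exps = {tok: math.exp(s / temperature) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

def sample(probs, rng):
    """Draw one token in proportion to its probability."""
    r = rng.random()
    cumulative = 0.0
    for tok, p in probs.items():
        cumulative += p
        if r < cumulative:
            return tok
    return tok  # guard against floating-point rounding at the boundary

rng = random.Random(42)
probs = softmax(logits)
draws = [sample(probs, rng) for _ in range(10_000)]

# "Canberra" is the model's most probable answer, yet the plausible-but-wrong
# "Sydney" is still sampled a large minority of the time.
print(draws.count("Canberra") / len(draws))
print(draws.count("Sydney") / len(draws))
```

    Lowering the temperature (or taking the argmax) makes the output deterministic, but that only locks in whatever the training data and fine-tuning baked in; it doesn't make the answer verified, just repeatable.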

    That notion of reliability has to do with "facts" determined by human beings and implanted in the model as learned "knowledge" via its training data. There's just so much wrong with pushing LLMs as a means of accurate information. One of the problems: suppose they got an LLM to, say, reflect the accuracy of Wikipedia 99% of the time. Even setting aside how shaky Wikipedia would be on some matters, it's still a black-box AI whose sources you can't check. You are supposed to just take it at its word. So sure, okay, you tune the thing to give the "correct" answer more consistently, but the person using it doesn't know that and has no way to verify you've done so without checking outside sources, which defeats the whole point of using it to get factual information...! 😑

    Sorry, I think this is turning into a rant. It frustrates me that they keep trying to shoehorn LLMs into being fact machines.


  • Too many people believe western media uncritically when it comes to international stuff. The contradictory part is they'll sometimes have skepticism, distrust, or even hatred for one or more major news sources that's focused on their own country's affairs. But when it comes to news about other countries, the same skepticism can be missing.

    Before I learned about ML and all that, I was in that place to some extent, I think. But now that I have some idea of what to look for and know a bit more about international affairs and history, it's really obvious how western media narratives about "human rights" are just narratives of convenience. The formula goes something like: "Is X country somewhere we want to prop up against Y country? If yes, X country is a bastion of human rights and Y country eats babies. Does X country actively oppose us? If yes, X country eats turbo evil cereal as mandatory breakfast meals in every citizen's state-mandated bowl."

    It's very cartoonish. And I mean I'm not even exaggerating to say it's cartoonish. I think of this video, which was from decades ago, yet is still so on point for the style of propaganda: https://www.youtube.com/watch?v=NK1tfkESPVY

    But one thing I'm not sure how to contend with is when people are deep in paranoia about "foreign agents" and "foreign propaganda" kind of thing. I recall one time online trying to show someone that video to make a point about western propaganda and they straight up refused to watch it. IIRC, they were also someone who had come into the convo thinking I was a Chinese shill or something, but weren't open about thinking that right away, so I naively attempted some good faith stuff at first.

    The kind of thinking where anything that contradicts the existing narrative must be coming from "the enemy" "in secret" is such a disturbing thing. I think, would hope, most of us here don't fall into that trap of thinking. For example, even something as straightforward as anti-imperialism is not binary good/evil; there can be countries run by factions that are not empowering the working class, the marginalized among their people as a system of power, but are nevertheless an important force of opposition against the western empire, against foreign capital and its exploitation.



  • Few thoughts on this:

    1. I think it's normal to have doubts in any endeavor. It means you're reflecting on the state of things, not just going on autopilot. Which can be an opportunity to evaluate where things are at, how you're doing within it, how others are doing, if there's anything you think needs doing differently and if so, how to go about it.

    2. Pulling people over to ML theory and practice, within countries that are consciously aware of it and vilify it any way they can, is undoubtedly challenging at times, but not impossible. You're working within a party, so you have a vehicle for getting people into practice, something that those who are just trying to persuade in an online space, for example, don't have, so that's already a plus. And you have a means of delivering information in an organized way. If you're concerned that things are being watered down too much in the process of growing the party, maybe there are ways you could help organize a revitalization of principles. I can't pretend to suggest specifics for a context I know nothing about, but the point I'm getting at is, there may be organized things you can do to push back against watering down while keeping the movement strong.

    3. Although I understand co-opting of movements and parties is a real concern, even when you're not pulling people all the way over to being principled MLs, you're still pulling them further from the cliff of the far right. And in the global fight against imperialism and its consequences, I would say that's better than having no sway over them at all. Maybe it'll take more imperialized countries liberating from the empire in order to weaken its hold at home, but even if it does, you can still be building and working toward the right conditions for a tipping point.

    Final thought: it's normal to be down sometimes and healthy to recognize it and process it. Moods come and go, but the struggle goes on. Kudos to you for the work you're doing.