Red Markets is a game about poverty and a zombie apocalypse, where the US federal government has written off the western states and the people who live there as "The Loss" and attempts to enforce a sense of "normality" on the remainder of the country (although the game takes place in The Loss, and I don't think the eastern US is detailed much).
I use diffusion models a fair bit for VTT assets for TTRPGs. I've used LLMs a little bit for suggesting solutions for coding problems, and I do want to use one to mass produce customized cover letter drafts for my upcoming job hunt.
Neither model class is sufficiently competent for any zero-shot task yet, or at least has too high of a failure rate to run without active supervision.
As for use in a socialist society, even the current version of the technology has some potential for helping with workers' tasks. Obviously, it would need to be rationed per its actual environmental and energy costs, as opposed to the current underwriting by VCs. You'd also want to focus on specialized models for specific tasks, as opposed to less efficient generalized models.
I wouldn't be surprised if they're using a variant of their 2B or 7B Gemma models, as opposed to the monster Gemini.
The LLM is just summarizing/paraphrasing the top search results, and from these examples, doesn't seem to be doing any self-evaluation using the LLM itself. Since this is free and they're pushing it out worldwide, I'm guessing the model they're using is very lightweight, and probably couldn't reliably evaluate results even if they prompted it to.
As for model collapse, I'd caution against buying too much into model collapse theory, since the paper that demonstrated it used a very narrow case study (a model purely and repeatedly trained on its own uncurated outputs over multiple model "generations") that doesn't really occur in foundation model training.
I'll also note that "AI" isn't homogeneous. Generally, (transformer) models are trained at different scales, with smaller models being less capable but faster and more energy-efficient, while larger flagship models are (at least, marketed as) more capable despite being slow, power- and data-hungry. Almost no models are trained in real-time "online" with direct input from users or the web, but rather with vast curated "offline" datasets by researchers/engineers. So, AI doesn't get information directly from other AIs. Rather, model-trainers use traditional scraping tools or partner APIs to download data, do whatever data curation and filtering they do, and then train the models. Now, the trainers may not be able to filter out AI content, or they may intentionally use AI systems to generate variations on their human-curated data (synthetic data) because they believe it will improve the robustness of the model.
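To make the "offline" point concrete, here's a toy sketch of what a curation pass looks like: documents are scraped first, filtered by heuristics, and only the surviving snapshot is ever trained on. Everything here (the function names, the filter rules) is illustrative, not any real lab's pipeline.

```python
# Hypothetical minimal "offline" curation pass. The model never sees
# live user input; it trains later on the frozen, filtered snapshot.

def looks_low_quality(doc: str) -> bool:
    """Toy heuristic filter: drop very short docs or obvious boilerplate."""
    return len(doc.split()) < 5 or "lorem ipsum" in doc.lower()

def curate(scraped_docs):
    """Return only the documents that survive the filters."""
    return [d for d in scraped_docs if not looks_low_quality(d)]

corpus = curate([
    "lorem ipsum dolor sit amet placeholder text goes here",
    "short",
    "A real article with enough words to plausibly be training data.",
])
print(len(corpus))  # 1 -- only the last document survives
```

Real pipelines use far heavier filtering (deduplication, classifiers, language ID), but the shape is the same: scrape, filter, freeze, then train.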
EDIT: Another way that models get dumber is that when companies like OpenAI or Google debut their model, they'll show off the full-scale, instruct-finetuned foundation model. However, since these monsters are incredibly expensive, they use these foundation models to train "distilled" models. For example, if you use ChatGPT (at least before GPT-4o), then you're using either GPT-3.5-Turbo (for free users) or GPT-4-Turbo (for premium users). Google has recently debuted its own Gemini-Flash, which is the same concept. These distilled models are cheaper and faster, but also less capable (albeit potentially more capable than if you trained a model from scratch at that reduced scale).
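For anyone curious what "distillation" actually optimizes: the small student model is trained to match the big teacher's *soft* output distribution, typically via a temperature-scaled KL divergence. A dependency-free sketch of that core objective (not any specific lab's training code):

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax: higher T softens the distribution,
    exposing more of the teacher's 'dark knowledge' about wrong answers."""
    exps = [math.exp(x / T) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distill_loss(teacher_logits, student_logits, T=2.0):
    """KL divergence between softened teacher and student distributions,
    the core objective used to train a smaller distilled model."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [2.0, 1.0, 0.1]
# A student that reproduces the teacher's logits incurs zero loss;
# a mismatched student is penalized.
print(distill_loss(teacher, teacher))         # 0.0
print(distill_loss(teacher, [0.1, 1.0, 2.0]))  # > 0
```

In practice this KL term is usually mixed with an ordinary next-token loss on the ground-truth data, but the KL-to-teacher term is what lets the small model inherit behavior it couldn't learn as well on its own.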
I mean outside of the Labor party.
Meh. Was hoping that he'd throw support behind a broader left-wing challenge to Starmer, but I guess it'd be good for him to remain in parliament.
I'm not saying it shouldn't be legalized or that those other benefits aren't present, I'm saying that cure-alls/panaceas don't exist and that legalization advocates in the past have indulged in that sort of myth-making.
Hemp/cannabis certainly has benefits, but a lot of those benefits have been exaggerated to support decriminalization/legalization. When it comes to medicine, cannabis has benefits as a non-opioid analgesic/painkiller, so that's obviously a huge boon for chronic pain, where the risk of opioid addiction and other side effects are a major concern. However, I would be skeptical of claims of healing properties of cannabis or any other proposed panacea.
Outside an atmosphere like Earth's, everything is already exposed to intense ionizing radiation from the sun/stars. A bit more from an RTG, even a big one, is a drop in the ocean. If we found signs of extraterrestrial life, then we'd want to be extra cautious about not sterilizing it by accident, but that's not currently a major concern. And of course, any sort of nuclear rocket propulsion would need to be handled with utmost care, but it's also not a major issue once it's outside the atmosphere.
To get the data to build and validate that type of model, you would need to study a lot more living brains than these neuralink experiments. Like, orders on orders of magnitude more. You can't really model this sort of thing from first principles.
There isn't a capability to model a complete living brain in silico, so not really at this time.
The proposal is for a globally-levied tax. Where exactly is capital going to fly to?
Season 2 is good, although there aren't really any big shake-ups in the formula. If you liked season 1, and you wouldn't mind some more, then go for it.
I saw one of these things in the wild this week. A good tenth of regular cars in town have rotted-out wheel wells due to road salt, so it'll be funny to look for them next spring.
Why are they teasing this as a secret game, when it's either EU5 or one of their ill-fated gaiden games (March of the Eagles, Sengoku, etc.), which I thought PDX stopped making?
Surprised that Microsoft is letting them compete in search engines while they're spending boatloads of money/compute on trying to make Bing relevant.
No, ninjas were just mercenary peasant-warriors/spies.
Samurai: feudal landlords
Ninjas: basically just peasants.
Gotta go with the working peoples.
This is actually not true. The use of AI media in the Chinese entertainment industry is just as pervasive as in the US, and probably more so, and Chinese universities and private firms are developing their own AI image/video generators at an equivalent pace to the Western firms. For example, you have Chinese-developed SOTA DiT txt2img models like PixArt, Hunyuan, and Lumina, and even SOTA video models like Kling. Tencent, Alibaba, and ByteDance are putting out various models, optimizations, and distillations in this space as well. Even back in April of last year, there were articles indicating a 70% decline in illustration jobs in sectors like video game development.