As climate change escalates, several things become clear to me:

  1. The only thing capitalists are consistently competent at is fucking over their workers
  2. The industry doesn’t have the impulse control to slow its own demise

Most of the consequences of this are horrific, widespread, and harrowing to consider. This one’s a little different. I wanted to get some opinions from devops folks who are more knowledgeable about the specifics of their work than I am.

Here’s the basic pitch. As climate change accelerates, freak weather patterns that span continents will become more common, as will infrastructure failure. We know that a key aspect of reliable offsite backups is proper separation of dependent systems. So if you think your servers are completely “redundant” because they run on separate VLANs and electrical circuits, but they’re housed in the same data center, sucks for you when the power goes out for longer than your generators can last. This is a major reason we have offsite backups. Trillions of dollars’ worth of data is stored on magnetic tapes all over the world (is this still the standard medium? Idk). But as catastrophic weather and infrastructure failure become more common, these separated systems become entangled again. If your offsite backups are on the east coast and your data center is on the west coast, having one building burn down in a wildfire while the other experiences rolling blackouts is not a great situation to have to prepare for.
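
To put rough numbers on why that entanglement scares me (these probabilities are completely made up, just to show the shape of the problem):

```python
# Made-up numbers, just to illustrate how correlation eats the benefit of a second site.
p_site = 0.02    # assumed chance a single site is knocked out in a given year

# Fully independent sites: both have to fail on their own.
p_both_independent = p_site * p_site

# Crude correlation model: with some probability, one continent-scale event
# (weather, grid collapse) takes out both sites at once.
p_shared = 0.005
p_both_correlated = p_shared + (1 - p_shared) * p_site * p_site

print(f"both sites down, independent: {p_both_independent:.3%}")  # 0.040%
print(f"both sites down, correlated:  {p_both_correlated:.3%}")   # ~0.540%
```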

So I’m curious how behind the curve I am on this, whether there are people at cons talking about this, etc. I assume the big players have contingency plans for this sort of thing, but a lot of companies don’t. Just another example of how capital can’t help but create the conditions of its collapse.

  • CheGueBeara [he/him]
    ·
    2 years ago

    The only way to feel confident in backups is to have at least 2 hosted by different companies / people on opposite sides of the planet and to test them both regularly. If one gets wiped out, the other is still fine.

    3 is much better. If you only have 2 backups and 1 fails, you've just pinned all of your hopes on a single copy. Better to still have 2 backups in the case where one fails.

    Believe it or not, this isn't a joke: you should actually do this lol
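
    If it helps, here's a rough sketch of what "test them regularly" could look like. The mount points are totally hypothetical; the idea is just to compare checksums of the same backup set across every copy you keep:

    ```python
    # Rough sketch: verify that every copy of the same backup set still matches.
    # The mount points are hypothetical; point them at wherever your copies live.
    import hashlib
    from pathlib import Path

    COPIES = [
        Path("/mnt/backup-local"),     # copy you control directly
        Path("/mnt/backup-remote-a"),  # copy synced from a host far away
        Path("/mnt/backup-remote-b"),  # third copy, different provider/person
    ]

    def checksums(root: Path) -> dict[str, str]:
        """Relative file path -> sha256 for everything under root."""
        out = {}
        for path in sorted(root.rglob("*")):
            if path.is_file():
                out[str(path.relative_to(root))] = hashlib.sha256(path.read_bytes()).hexdigest()
        return out

    reference = checksums(COPIES[0])
    for copy in COPIES[1:]:
        other = checksums(copy)
        missing = reference.keys() - other.keys()
        mismatched = [f for f in reference.keys() & other.keys() if reference[f] != other[f]]
        print(f"{copy}: {len(missing)} missing, {len(mismatched)} mismatched")
    ```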

  • BigLadKarlLiebknecht [he/him, comrade/them]
    ·
    2 years ago

    Many many years ago I worked doing sysadmin for a large pharmaceuticals company. This was pre-cloud era, so they had (and probably still have) a large data center for their Europe/Middle East/Africa operations. One day, hundreds of servers went offline as power distribution for dozens of racks went out. They all had uninterruptible power supplies, and there was a duplicate power distribution path in case one went out, with separate mains supply. The whole 9 yards.

    Except someone had effectively plugged everything into the same power distribution path twice.

    Hundreds of thousands spent on that redundancy and someone basically fucked it by plugging it in wrong :data-laughing:

    I’m pretty sure most large enterprises are closer to utter disaster than they realize, they just haven’t been actually stress tested outside of known scenarios.

    • MendingBenjamin [they/them]
      hexagon
      ·
      2 years ago

      I’m pretty sure most large enterprises are closer to utter disaster than they realize, they just haven’t been actually stress tested outside of known scenarios.

      This is my intuition, especially as things trigger domino effects. I could definitely see an IT equivalent of the supply chain failure happening in the next decade. Not the exact same mechanisms of course, but just things being more interdependent than was expected and no one keeping an eye on the system as a whole

      • BigLadKarlLiebknecht [he/him, comrade/them]
        ·
        2 years ago

        :100-com: as someone who now works at godforsaken startups, it’s a fucking miracle any of these apps work, even as badly as they do. Years upon years of people taking shortcuts to get promotions, vest their equity, and move on to the next thing. Rinse and repeat. There’s no resilience or flexibility in many codebases more than a couple of years old. At my current company, which has hundreds of engineers, there are one or two people who approximately understand how our core business logic works.

        What’s fun is that if the gears of privatisation keep turning, more and more parts of the state will be entrusted to these Rube Goldberg machines of careerism. Especially when Artificial Intelligence woo is sold as a means of addressing the many vectors of collapse. I honestly hope there’s a widespread technological disaster before too long, from that perspective.

    • PigPoopBallsDotJPG [none/use name]
      ·
      2 years ago

      Several datacenters here in the EU do regular blackout tests, where they switch a power feed over to the backup generators in order to verify that things actually fail over like they should.
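
      The software-side equivalent of that drill is worth automating too. Here's a rough sketch (the endpoint and timings are made up): poll a health check while the feed gets switched over and record how long anything was actually down:

      ```python
      # Rough sketch: poll a health endpoint during a planned failover drill
      # and record any gaps. URL and timings are hypothetical.
      import time
      import urllib.error
      import urllib.request

      HEALTH_URL = "https://example.internal/healthz"  # hypothetical endpoint
      DRILL_SECONDS = 600
      POLL_INTERVAL = 5

      outages, outage_started = [], None
      end = time.monotonic() + DRILL_SECONDS
      while time.monotonic() < end:
          try:
              with urllib.request.urlopen(HEALTH_URL, timeout=3) as resp:
                  up = resp.status == 200
          except (urllib.error.URLError, OSError):
              up = False
          now = time.monotonic()
          if not up and outage_started is None:
              outage_started = now
          elif up and outage_started is not None:
              outages.append(now - outage_started)
              outage_started = None
          time.sleep(POLL_INTERVAL)

      if outage_started is not None:  # still down when the drill window ended
          outages.append(time.monotonic() - outage_started)
      print(f"outages during drill: {len(outages)}, longest: {max(outages, default=0):.0f}s")
      ```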

    • MendingBenjamin [they/them]
      hexagon
      ·
      2 years ago

      A proper jubilee would have to be executed like a fucking heist. But having just one or two sysadmins sign onto the cause could make it possible. They order the destruction of the offsites at the same time a bunch of people turn off the power and start popping out SSDs and smashing them with hammers. If you do it well, you could probably be done in half an hour before the higher-ups even properly know what’s going on.

  • Cummunism [they/them, he/him]
    ·
    2 years ago

    afaik, tape storage is probably still the best long term storage. the main issue is storing it, temperature control and whatnot. i know there are "archival" blurays that hold like 100GB though so maybe those are better now. im not sure where most backup sites are, but ive seen some and they're in spots and buildings where there's almost no way the equipment would suffer. infrastructure could go down, but i think in this case the important part is keeping the equipment and data safe. ive not seen a datacenter in a flood zone or something.

    • MendingBenjamin [they/them]
      hexagon
      ·
      2 years ago

      The one data center I used to work at was at one of the highest spots in the tri city area and still had a trench dug around it to divert water away. I believe they also had several towers around the perimeter to act as lightning rods and enough space to build a solar farm if need be. It’s pretty clear the designers of real high end places take natural disasters into account, but I don’t know how resilient the whole thing is

      • Cummunism [they/them, he/him]
        ·
        2 years ago

        there is a cost difference between facilities im sure, but that just means the actual important data is most likely the most securely backed up. i dont know if i fully understand what sort of catastrophe you mean though. im thinking of something like a datacenter getting cut off by a flood, but the equipment and data still being safe somehow. maybe youre thinking of some end of the world shit, at which point it won't really matter.

        • MendingBenjamin [they/them]
          hexagon
          ·
          2 years ago

          Maybe I’m mixing up the redundancy of availability with the redundancy of backups

    • Mardoniush [she/her]
      ·
      2 years ago

      Optimum pure storage is indeed bluray (or for "I hope the archeologists can read this" levels of critical data, record discs)

      • xXthrowawayXx [none/use name]
        ·
        2 years ago

        Have you been keeping up with the CD and laserdisc community? Lots of em have been reporting failures on optical disc media for over a decade now. One of the worst failure modes is delamination around the edge, where the substrate is bonded to the plastic. It’s most common on recordable media, but certain CD pressing plants are bad for it too.

        Of course laserdisc has bit rot, but they’re falling apart in new and interesting ways now too. Paints and dyes used on the sleeves are messing up the media and they’re big enough around to pick up a warp if not stored either upright or flat.

        What’s the prognosis on blu ray? Does it have any longevity engineered in?

        • Mardoniush [she/her]
          ·
          edit-2
          2 years ago

          Yeah, I've seen that. Modern archival-specific discs are mostly immune to these issues, though they are expensive. Blu-rays in general don't have the layer of organic material on (most of) them, but I've seen reports of some Archaea getting to them and degrading the metal nitride.

          You also have to store any sort of archival stuff properly of course, regardless of medium. You can throw a tungsten tablet with words carved on it into a damp basement and it'll have a bad time in a century or two.

          Cool, dry, humidity controlled, power independent storage. Preferably somewhere high and geologically stable. No light if possible, red light if needed. Store upright, use inert casing if you can.

          This is for "store and forget" stuff no one might look at for a decade or more; realistically you should be copying it forward and, where possible, updating formats every 5 years or less.
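
          Something like this is enough to keep track of that refresh cadence (the manifest contents and the 5-year threshold are just my own assumptions, not any standard):

          ```python
          # Rough sketch: flag archived items overdue for a refresh / format migration.
          # The manifest and the 5-year threshold are assumptions, not a standard.
          from datetime import date

          REFRESH_YEARS = 5

          manifest = {                       # item -> date the current copy was written
              "family-photos-2010s": date(2019, 6, 1),
              "thesis-scans": date(2023, 2, 14),
              "zine-masters": date(2017, 11, 30),
          }

          today = date.today()
          for name, written in sorted(manifest.items(), key=lambda kv: kv[1]):
              age_years = (today - written).days / 365.25
              status = "REFRESH" if age_years >= REFRESH_YEARS else "ok"
              print(f"{status:7}  {name}  ({age_years:.1f} years old)")
          ```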

          Source: work as an occasional (unqualified) archivist for casual stuff, know a professional closely

  • Mardoniush [she/her]
    ·
    edit-2
    2 years ago

    And this is why I have multiple offsite backups on different continents. If they all go down we have bigger issues. Even then, they're physically accessible by us so someone can just throw the HDDs into a truck.