Permanently Deleted

    • penguin_von_doom [she/her]
      ·
      4 years ago

      No, we shouldn't. But again, these algorithms derive whatever they derive from the data, automatically. Nobody is manually putting "race" or "gender" or things like that into the algorithm itself (at least in most of them). And this is where the trap lies: because these algorithms are neutral on their own, the people who use them get tricked into thinking that the outcome is also neutral and objective, but forget that it is literally all determined by the data that goes in.
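
      For example, a minimal sketch (made-up data, scikit-learn assumed as the "mainstream" tool) of a model that is never shown the protected attribute but still reproduces the bias through a correlated proxy feature:

      ```python
      # The protected attribute ("group") is never passed to the model, but a
      # proxy feature ("zip_code") correlated with it is, so the learned
      # decisions still track the protected group. All names and numbers here
      # are illustrative.
      import numpy as np
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(0)
      n = 5000

      # Hidden protected attribute (never a model input).
      group = rng.integers(0, 2, n)

      # Proxy feature: strongly correlated with group membership.
      zip_code = np.where(rng.random(n) < 0.9, group, 1 - group)

      # Historical labels that were themselves biased against group 1.
      income = rng.normal(50 + 10 * (1 - group), 5)
      approved = (income + rng.normal(0, 5, n) > 52).astype(int)

      X = np.column_stack([zip_code, rng.normal(size=n)])  # no "group" column
      clf = DecisionTreeClassifier(max_depth=3).fit(X, approved)

      pred = clf.predict(X)
      for g in (0, 1):
          print(f"group {g}: approval rate {pred[group == g].mean():.2f}")
      # The model never saw "group", yet its approval rates differ by group,
      # because the biased historical outcomes were reachable via the proxy.
      ```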

      • FloridaBoi [he/him]
        ·
        4 years ago

        They aren’t necessarily neutral, since choices about the relative importance of characteristics can bake implicit bias into the training of the algorithm. It’s not just a one-way street of biased data being fed in; the very structure of the training can introduce biases and accentuate them.
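
        As a minimal sketch of that (made-up data, scikit-learn assumed): same data, same features, but how examples are weighted in the loss - a structural choice in the training setup, not a property of the data - changes which group ends up absorbing the errors:

        ```python
        # The loss weights every example equally by default, so a 90/10 group
        # imbalance means the model is effectively tuned for the majority group
        # and the minority group bears most of the errors. Re-weighting is an
        # equally valid training setup with a different bias profile.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)
        n_major, n_minor = 9000, 1000

        # Two groups whose feature/label relationship differs slightly.
        X_major = rng.normal(0.0, 1.0, (n_major, 2))
        y_major = (X_major[:, 0] > 0).astype(int)
        X_minor = rng.normal(0.0, 1.0, (n_minor, 2))
        y_minor = (X_minor[:, 1] > 0).astype(int)  # minority labels depend on the other feature

        X = np.vstack([X_major, X_minor])
        y = np.concatenate([y_major, y_minor])
        group = np.array([0] * n_major + [1] * n_minor)

        for name in ("unweighted", "group-weighted"):
            if name == "unweighted":
                w = np.ones(len(y))
            else:
                # Up-weight minority examples so both groups matter equally to the loss.
                w = np.where(group == 1, n_major / n_minor, 1.0)
            clf = LogisticRegression().fit(X, y, sample_weight=w)
            acc = [(clf.predict(X[group == g]) == y[group == g]).mean() for g in (0, 1)]
            print(f"{name}: accuracy group0={acc[0]:.2f}, group1={acc[1]:.2f}")
        ```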

        • penguin_von_doom [she/her]
          ·
          4 years ago

          But that's the thing: in most mainstream algorithms you do not program the relative importance of any characteristics; you do not program any characteristics at all. The algorithm learns all of these on its own from the data, and you only choose which features to include - and that is the source of bias, not the algorithm that decides how to split your decision tree ...
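
          For instance, a minimal sketch of what that looks like in practice (scikit-learn assumed, column names purely illustrative): the only human choice below is the feature list; the splits themselves are derived from the data by the library:

          ```python
          import pandas as pd
          from sklearn.tree import DecisionTreeClassifier, export_text

          # Hypothetical historical data; values and columns are made up.
          df = pd.DataFrame({
              "income":   [30, 45, 60, 52, 75, 41, 68, 38],
              "zip_code": [1, 1, 0, 0, 0, 1, 0, 1],
              "approved": [0, 0, 1, 1, 1, 0, 1, 0],
          })

          # The human decision that lets bias in or keeps it out: which features go in.
          features = ["income", "zip_code"]

          # No characteristic importances are programmed here; the split criteria
          # and thresholds are learned by the library from the data.
          clf = DecisionTreeClassifier().fit(df[features], df["approved"])
          print(export_text(clf, feature_names=features))
          ```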