The original thread: https://twitter.com/baumard_nicolas/status/1308715606196342784. A different thread, by someone else, explaining why this is okay: https://twitter.com/beausievers/status/1309486084485779457. Here is one selected quote:

They then ask a set of broader questions, something like "What are the relationships between trustworthiness displays, survey measures of trust, and other cultural/sociopolitical factors, including GDP?"

All of this comes from an incredibly Euro-centric perspective, and that is not addressed enough, or clearly at all, which further complicates things when they explain that it's not actually racist.

  • Civility [none/use name]
    ·
    4 years ago

    So, having read the paper, their methodology is completely fucked.

    What they’ve done is take a machine learning algorithm that rates the “trustworthiness” and “dominance” of photos of human faces and apply it to portraits from the 16th through the 21st century. The algorithm rates the portrait faces as getting more trustworthy over time and, more strongly, as GDP increased, and they claim this as evidence that people living in places with higher GDP are more trustworthy.

    The first glaring flaw in their methodology is that their machine learning algorithm doesn’t actually predict the trustworthiness of the faces. What it does is predict the trustworthiness rating a white 1990s-to-early-2000s US college student would give that face, and it was trained on the ratings those students gave photos and CGI avatars, not portraits.

    The second is that they in no way address the inherent preselection: these aren’t generic members of the populations they’re running this algorithm on. People who have portraits painted of them are a very skewed portion of the population with respect to how they present socially.

    The third is that they assume that people being painted as appearing more trustworthy means they actually appeared more trustworthy, not just that they were painted that way. They claim that because the increase in the algorithm’s perceived trustworthiness correlates more strongly with GDP than with time, it must be so. That ignores the possibility that people in higher GDP-per-capita societies might want to appear more trustworthy in portraits than people in lower GDP-per-capita societies, or that there were probably more good vanity portrait painters in higher GDP-per-capita societies than in low ones.

    The fourth and most glaring is that they assume perceived trustworthiness (what the algorithm is rating) is equivalent to actual trustworthiness.

    The fifth is that they noticed a correlation between perceived “dominance” and perceived “trustworthiness” and so decided to correct for dominance??? to negate social bias??? as if what they were looking at wasn’t completely based on social bias???

    They then go on to justify and support their conclusions by referring to a body of work about “scarcity” and “abundance” psychology.

    The research they did is kind of interesting. “People in higher GDP-per-capita societies who had portraits painted of them were painted in ways that seemed more trustworthy to modern white US college kids, when controlled for how dominant those kids perceived those portraits to be” is a cool thing to know, but it in no way supports their claim that people in rich societies are more trustworthy, and it’s extremely bad science to suggest it does.

    • JoesFrackinJack [he/him]
      hexagon
      ·
      4 years ago

      The first glaring flaw in their methodology is that their machine learning algorithm doesn’t actually predict the trustworthiness of the faces. What it does is predict the trustworthiness rating a white 1990s-to-early-2000s US college student would give that face, and it was trained on the ratings those students gave photos and CGI avatars, not portraits.

      Exactly what I was trying to say! Thank you. And the rest of what you wrote made much more sense of it than I could.