45 comments
  • Although nonstandard English and pidgins often demonstrate the same level of nuance and complexity as standard English, negative stereotypes about them remain common. One has to wonder whether the LLMs generated from (stolen en masse) written output say as much about us as they do about their creators.

    • Pretty much. It was trained on human writing, and then people are surprised when it has human biases.

      • An LLM needs to evaluate and modify its preliminary output before actually sending it. In the context of a human mind, that’s called thinking before opening your mouth.

  • Yeah, it turns out that when your entire tech industry is dominated by cishet white techbros, and the entire foundation of their education and the production of such models rests on that, then you get racist as fuck outcomes from any algorithm that is a product of that same set of normative standards.

    If you have the time I highly recommend reading Palo Alto by Malcolm Harris, it's a great primer on how all this shit got started and why we should frankly just burn Silicon Valley to the ground.
