49 comments
  • Err, yeah, I get the meme and it's quite true in its own way...

    BUT... This research team REALLY needs an ethics committee. A heavy-handed one.

    • As much as I want to hate the researchers for this, how are you going to ethically test whether you can manipulate people without... manipulating people? And isn't there an argument to be made for harm reduction? I mean, this stuff is already going on. Do we just ignore it, or only test it in sanitized environments that won't really apply to the real world?

      I dunno, mostly just shooting the shit, but I think there is an argument to be made that this kind of research and its results are more valuable than the potential harm. Though the way this particular research team went about it, including changing the study fundamentally without further approval, does pose problems.

  • That story is crazy and very believable. I 100% believe that AI bots are out there astroturfing opinions on reddit and elsewhere.

    I'm unsure if that's better or worse than real people doing it, as has been the case for a while.

    • Belief doesn't even have to factor; it's a plain-as-day truth. The sooner we collectively accept this fact, the sooner we change this shit for the better. Get on board, citizen. It's better over here.

      • I worry that it's only better here right now because we're small and not a target. The worst we seem to get are the occasional spam bots. How are we realistically going to identify LLMs that have been trained on reddit data?

    • What is likely happening is that bots are manipulating bots.
