9 comments
  • I mean, the article is pretty long, but its advice is pretty simple:

    • Don't use AI of any sort as a source for an answer to your question.
    • Do use Wikipedia, and check the sources it references.
    • If it's not on Wikipedia, check a trusted source with a relatively long publishing history and known ownership. (This doesn't mean only outlets like The New York Times... Boing Boing, for example, has been around for a long, long time.)
    • Use archive.org's Wayback Machine to get access to older articles when necessary.
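
    That last step is easy to automate. Here's a minimal sketch using archive.org's public availability API (the endpoint is real; the example site and timestamp are just illustrative, not from this thread), stdlib only:

    ```python
    import json
    import urllib.parse
    import urllib.request

    def wayback_lookup(url, timestamp=None):
        """Return the closest archived snapshot URL for `url`, or None.

        `timestamp` is an optional YYYYMMDD string; the API returns the
        snapshot closest to that date.
        """
        params = {"url": url}
        if timestamp:
            params["timestamp"] = timestamp
        api = "https://archive.org/wayback/available?" + urllib.parse.urlencode(params)
        with urllib.request.urlopen(api) as resp:
            data = json.load(resp)
        # The API nests the result under archived_snapshots -> closest.
        snapshot = data.get("archived_snapshots", {}).get("closest")
        return snapshot["url"] if snapshot and snapshot.get("available") else None

    # Example (makes a network request, so it's left commented out):
    # print(wayback_lookup("boingboing.net", "20050101"))
    ```

    Handy when a source has quietly edited or deleted an article and you want to compare it to what was actually published at the time.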

    LLMs have no frame of reference for real life except the text they've ingested, and they have no way to know which text is true and which isn't. To an LLM, Alex Jones is just as valid a source as Mother Jones.

  • I really have very little tolerance for people on the continuum from techbro to naive-libertarian who try to invent lots of hypothetical technical solutions to this, all of which boil down either to systems of centralized control or to wildly unrealistic systems that will never get off the ground, like fantastical depictions of flying machines...

    Let's get straight to the point: the hard problem here is that, philosophically, there really is no shortcut for telling AI slop from genuine, real information. There can fundamentally be no logical operation you can perform that separates the "real" from the "bot spam," because you fail at the very first step of defining "real", especially if you are a techbro or libertarian fool who has never thought through the implications of any of this (see the shitshow that is the social media hellhole gab).

    I think a lot of people I am tempted to refer to as "centrist" (though that is a problematic generalization, and it is of course more complicated) want to believe we just need more authoritarianism and more advanced technology to solve this problem, and that is ultimately a dangerous fantasy.

    At a philosophical level, which, let me remind everyone, is the level you need to think at before you ever bother with technical implementations and advanced AI fact checkers, blah blah blah... the only thing we can really do is design spaces that make it most likely for the human parts of real information to shine through, in a way that makes it apparent that the information was unlikely to have been generated by a bot or a nefarious actor.

    This is a game of probabilities, like trying to guess someone's intentions or understand what they are feeling: we might get very, very good at it, but there is always a significant chance that we are wrong, whether from a lack of context or simply because that is how things go with unpredictable, chaotic systems...

    So then how do we design spaces that let the authenticity of "real" things shine through? I would argue the answer is genuine, spontaneous conversation and interaction in public or semi-public shared spaces. Forums, lemmy/reddit-likes, and other forms of public discussion create conversations, and as human beings we are INCREDIBLY good at observing interactions between strangers and sensing whether those interactions feel genuine or not.

    We can often be wrong about it, but anybody who has done theater for any amount of time, or really any kind of art for an audience, knows that though the audience may not be able to put into words why something feels inauthentic, they notice the moment it does. That is why art performed for an audience is so endlessly compelling and why you can spend a lifetime learning from it.

    What we can hope to do, though we will always fail at some level because we can never be ideal, is to help build the "real" collectively through public conversation, disagreement, explanation, and the sharing of sources of information.
