AI Summaries of Articles

Most people don't read the article that's linked in a post. So I added an AI summary of the link as a comment, behind a spoiler so anyone who doesn't want to engage with it doesn't have to, along with the full article text so people can read it more accessibly. The spoiler also keeps it from taking up a full page of a comment. A mod removed it as AI slop.

I could use AI on a headline and you would never know the difference. I could claim a summary as my own and you probably wouldn't know the difference either. Punishing people for being transparent about using LLMs, when they aren't forcing the reader to engage with the output, only discourages honesty; transparency is a net positive and a good practice to teach. The alternative is that people still use them and just pretend they aren't.

41 comments
  • honesty on its own is not a virtue here. the goal is to eradicate AI slop in this space. why would we allow it under the pretense of 'at least they admit it'? that's not the goal. the goal is to remove it entirely. when it's detected, it should be gone.

    it is also not at all an accessibility aid. as the exact demographic of person (rather severe presentation of ADHD) who would supposedly be most aided by this, as well as a data science major, I wholeheartedly reject the idea that it in any way meets an acceptable standard for constituting one. the average person genuinely doesn't know the sheer amount of subtle fuckups and misinformation these diceroll plagiarism boxes output even when provided the exact text they are supposed to paraphrase. rather, because the output 'seems right,' its main effect is disinformative: it encourages people to skip the article and defer to the generated 'summary.' I simply do not think this is a sound argument.

    just write the summary yourself. I assume you've read the article. it can be a paragraph. let's say you don't want to: we can access the text, we can access these chatbots, and we can toss the article at the chatbots on our own time. I don't want AI slop on this forum at all, and I oppose the normalization of it, especially under flimsy pretenses such as this.

  • I don't feel that any potential benefits of the bazinga plagiarism machine outweigh the very obvious downsides: the outputs are often completely wrong, and the AI industry runs on massive energy consumption with a huge environmental impact.

  • As an Article Reader:

    Are you verifying the "AI" is spitting out something legible and actually carrying the spirit of the source material?

    If so, how much more effort is that than typing up your own summary, eliminating the uncertainty of the "AI"?

    • It has less to do with me and more to do with what can and will happen. People will say they wrote it when they did not.

      I think having set boundaries around AI is more helpful than tasking mods with removing whatever they believe is AI. I've seen people on here reply with AI generations and disclose that it's AI; I don't really have a problem with that, and I actually find it to be a good practice. I just think we should go a step further and put it behind a spoiler so people who don't want to engage with it don't have to.

      Removing it entirely will just mean it gets posted without people saying it's AI and without spoiler tags.

      • "Removing it entirely will just mean it gets posted without people saying it's AI and without spoiler tags."

        "Sometimes it's hard to tell if something is AI, therefore we should stop moderating AI entirely" is not the winning argument you think it is.

        I noticed that you haven't addressed any of the environmental concerns people have brought up in this thread. Putting everything else aside, doesn't the effect on the climate bother you in the slightest? Don't you feel that alone is a very compelling argument for refusing to engage with or promote it?

  • I'm glad to hear you've found a way to use AI that isn't stealing the data and work of other people, and that runs on all the extra green-energy capacity we have, thereby not contributing to global warming at all.

    If you think that "but it's okay if you don't want to see it" is a good argument in favor of this you've got some more thinking to do.

    • Yes, on-device LLMs use so much power that my device was drained by this one summary. And what work is being stolen by a summary?

      Somehow this site has gone from "don't blame individuals for global warming" to "anyone who even downloads R1 doesn't care about humanity."

  • eh, i can run it through my own llm if i feel like it. i usually don't.