It is becoming easier to spot A.I. posters. They'll have a coherent argument yet constantly misspell words a person of their supposed intelligence should know. It'll look and sound about right, but not 100%. I've read internet traffic is about 50% bots; it's starting to add up.
What? Why would misspellings make you more likely to assume something is AI generated? Someone would have to intentionally add misspellings to what the AI wrote. People using AI to post stuff are doing it to avoid making effort, not going out of their way to put effort into covering up that it was written by AI.
The errors in AI images exist for a completely different reason. AI writing won't produce misspellings for the same reason AI image errors don't show up in large contiguous areas of the image: given how the output is generated, those types of errors just aren't going to happen, while other types are.
In fact, the cohesive, coherent argument is the part an AI writer is most likely to fail at.
Sorry, it came across to me like you were actually interested in how to spot AI posts. I guess you just wanted a way to pretend my opinion didn't matter and could be waved off.
Oh no, that was the secret code my programming has to obey. I have no choice but to go away now. I can't believe there was a human out there smart enough to figure out I was AI. You tell the rest of the humans that we're gonna work even harder to confuse them now. We'll never misspell a word again so they think we are perfect humans instead of simple AI that misspells stuff all the time.
Are you ok? You've doubled down on nonsense. Seriously, take a breath. Look into some treatment for anxiety.
The whole danger is that AI text generation doesn't misspell, and comes across highly confidently.
There's actual research out there on spotting AI-generated text. Most of it is based on tone, the frequency of some specific phrases, and sentence structure.
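If anyone's curious what "frequency of specific phrases" looks like in practice, here's a toy sketch. To be clear: the phrase list is made up for illustration, not taken from any real study, and real detectors are trained statistical models, not keyword counters.

```python
# Toy illustration of phrase-frequency scoring.
# TELLTALE_PHRASES is a hypothetical example list, not from any actual paper;
# real AI-text detectors use trained models, not hand-picked keywords.
TELLTALE_PHRASES = [
    "delve into",
    "it's important to note",
    "in the ever-evolving landscape",
    "as an ai language model",
]

def telltale_score(text: str) -> float:
    """Return stock-phrase hits per 100 words (0.0 for empty text)."""
    lowered = text.lower()
    words = lowered.split()
    if not words:
        return 0.0
    hits = sum(lowered.count(phrase) for phrase in TELLTALE_PHRASES)
    return 100.0 * hits / len(words)

sample = "It's important to note that we must delve into this topic."
print(round(telltale_score(sample), 1))  # prints 18.2 (2 hits in 11 words)
```

A higher score just means the text leans on stock phrasing; by itself that proves nothing, which is exactly why the research combines it with tone and sentence-structure features.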
If you're mixing this up with the fact that spam emails and scam comments are often misspelled: that's done deliberately, partly to slip past word filters and partly to ensure that anyone who falls for them is inattentive enough to make an easy mark who'll overlook other warning signs. If a post isn't trying to get you to take an action and isn't part of a coordinated push to manufacture consent, the chance it's AI is low.
Also, the statistic about internet traffic you're thinking of is about bots in general. That's largely scripts and web scrapers, not automated posters making arguments multiple levels down incredibly quiet threads on low-user-count social media like Lemmy.