Humans cannot recognize AI-written reviews - human reviews were often mistaken for AI-generated reviews, and even more frequently, AI-generated reviews were mistaken for human reviews
Online reviews serve as a guide for consumer choice. With advancements in large language models (LLMs) and generative AI, the fast and inexpensive creation of human-like text may threaten the feedback function of online reviews if neither readers nor platforms can differentiate between human-written...
Seems like good old word of mouth, where information is relayed among people who know each other in real life, is probably going to make a big comeback for this kind of stuff.
Because if you can't trust any media on the internet to be "real", the only trustworthy sources you'll have are the real people in your real life.
I don't read five-star reviews anymore. If I want to find a believable endorsement of a product, I'll look for a four-star review that contains a criticism that isn't that bothersome to me personally, but legitimate enough that I can imagine a customer who would be deterred by it.
We moved a year ago, and I found my favorite pizza guy, Tony, through maybe the most convincing online review I've ever read. The most recent review on Google Maps was a one-star that was basically like "I met Tony and he casually used foul language etc etc there is no need for profanity etc pizza was some of the best I've ever had though"
I read all the one-star reviews. If they're all something akin to "my food was too colorful" or "the waitress didn't refill my water enough", then it's probably ok.
I love it when owners give a sassy response, or when a one-star review tells a story about the owner that makes me laugh.
My favorite local pizza shop has been family-run for several generations, everything is from scratch, and everything is done a certain way. I know everyone who works there and everyone who has ever worked there, and when me and the boys read this review, even though we all moved away, we knew it was true because it's completely on brand.
Best Pizza going, but their menu is on Bristol board and they don't do complicated orders.
Tony's great. He does a thing he calls "Detroit style stuffed pizza" which does not really seem to be a Detroit style pizza at all but it's fantastic nonetheless.
A lot of people like his sandwiches and visually they look very appetizing, but for whatever reason they don't hit the spot for me. His pizzas are spectacular, and good breadsticks and wings too.
Yeah, reviews are relatively easy to fake with current technology. They're short and most of them follow a fairly limited set of formats. This isn't like generating hands where there are a ton of ways for an AI to give itself away. Not that most humans are very good at drawing hands.
Reviews are often written by people whose first language isn't English, or they're machine-translated (nowadays by AI itself). Many models are trained on real reviews, so it's not surprising to me that the differences aren't easily spotted. The main tells are overly flowery language and tone, and taking too long to get to the point.
To me it seems similar to hearing an electronic speaker playing a bird call vs. a real bird call: could you tell which one was real from a distance?
If you were an expert birdwatcher you could probably tell more easily. If the speaker repeated the exact same call on a loop you could tell, and if you were within earshot of electronic buzzing in the background you could too, but depending on how sophisticated the speaker setup is (delays, a variety of calls), you might not.
"Interestingly, this effect cannot be explained by differences in participants’ experience with generative AI models, as that variable is insignificant in the mode"
When predictors are correlated, which is most likely the case here, this kind of analysis cannot separately estimate their effects. The software ends up splitting the total effect size between the two predictors, which inflates the standard error of each estimate. Without reporting the collinearity between predictors, it's not possible to judge whether experience with AI is truly unimportant or whether the analysis is simply incapable of detecting its effect.
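To illustrate, here's a minimal simulation sketch in Python with numpy/statsmodels (all data and variable names are made up, not from the paper) showing how two correlated predictors can "split" a real effect so that neither looks significant on its own, even though the model as a whole fits well:

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.outliers_influence import variance_inflation_factor

    rng = np.random.default_rng(42)
    n = 200

    # Hypothetical stand-ins for "experience with AI" and some correlated trait.
    x1 = rng.normal(size=n)
    x2 = 0.98 * x1 + 0.2 * rng.normal(size=n)  # corr(x1, x2) ~ 0.98

    # Both predictors genuinely affect the outcome.
    y = x1 + x2 + rng.normal(scale=2.0, size=n)

    X = sm.add_constant(np.column_stack([x1, x2]))
    joint = sm.OLS(y, X).fit()
    print(joint.pvalues[1:])  # individual p-values can easily exceed 0.05...
    print(joint.f_pvalue)     # ...while the overall model is highly significant

    # Drop the collinear twin and x1 looks strongly significant again.
    solo = sm.OLS(y, sm.add_constant(x1)).fit()
    print(solo.pvalues[1])

    # VIF quantifies the inflation; values above ~10 are a common red flag.
    print(variance_inflation_factor(X, 1), variance_inflation_factor(X, 2))

Which is why reporting a collinearity diagnostic (a correlation matrix or VIFs) would help before declaring a predictor "insignificant".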
As for eroding confidence in reviews, this will make it worse, but I already put next to no stock in user reviews anymore. You don't need AI to make a good human-like review that lies about a product, and there are plenty of those around.