What have you found to be an effective way to tell if you're chatting with a bot or a real person?
Sometimes it can be hard to tell if we're chatting with a bot or a real person online, especially as more and more companies turn to this seemingly cheap way of providing customer support. What are some strategies to expose AI?
I've found that for ChatGPT specifically:
- it really likes to restate your question in its opening sentence
- it also likes to wrap up with a take-home message: "It's important to remember that..."
- it starts sentences with little filler words and phrases. "In short," "that said," "ultimately," "on the other hand,"
- it's always upbeat, encouraging, bland, and uncontroversial
- it never (that I've seen) gives personal anecdotes
- it's able to use analogies, but not well; they never help elucidate the matter (rough heuristic sketch of these tells below)
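Purely as an illustration, here's how you might turn those tells into a naive text heuristic. Everything in it (the phrase lists, the scoring, the `looks_like_chatgpt` name) is an assumption made for the sketch, not a reliable detector:

```python
import re

# Hypothetical phrase lists based on the tells above (assumptions, not exhaustive).
OPENER_FILLERS = ["in short", "that said", "ultimately", "on the other hand"]
WRAP_UPS = ["it's important to remember", "it is important to remember"]

def looks_like_chatgpt(text: str, threshold: int = 2) -> bool:
    """Naive heuristic: count how many of the listed tells appear."""
    lowered = text.lower()
    score = 0

    # Tell: sentences opened with filler phrases.
    sentences = re.split(r"(?<=[.!?])\s+", lowered)
    if any(s.startswith(tuple(OPENER_FILLERS)) for s in sentences):
        score += 1

    # Tell: a take-home-message wrap-up.
    if any(phrase in lowered for phrase in WRAP_UPS):
        score += 1

    # Tell: no personal-anecdote markers (a very crude proxy).
    if not re.search(r"\b(i remember|when i was|in my experience)\b", lowered):
        score += 1

    return score >= threshold

# Example: a bland, wrap-up-heavy reply trips the heuristic.
print(looks_like_chatgpt(
    "That said, both options have merit. Ultimately, it's important to "
    "remember that the best choice depends on your needs."
))
```

The example reply scores on the filler opener, the wrap-up phrase, and the lack of any anecdote, which is exactly the combination described above.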
> it starts sentences with little filler words and phrases. "In short," "that said," "ultimately," "on the other hand,"
Yeah Chat GPT writes like a first-year undergrad desperately trying to fulfil the word count requirement on an essay.
Which works out, because a lot of first-year undergrads are probably using it for that purpose.
Yeah, I'd hate to be marking/grading student essays these days.
At least when you're reading a website you can just click away once you realise who wrote it.
Nah, just get chatGPT to grade them too.
First years have max word counts now, not minimums. That's more of a high school thing.
The universities I've been at had a specific word count to aim for, rather than a max/min.
And anything more than 10% over or under it was penalised.
It makes more sense, because if you're writing for publication they work to an approximate target word count too.
Last time I talked about this with the other TAs, we ended up coming to the conclusion that most papers that were decent were close to the max word count or above it (I don't think the students were really treating it as a max, more like a target). Like 50% of the word count really wasn't enough to actually complete the assignment.
Totally, good assessment design matches the rubric with an appropriate length, so it's hard for them to fulfill it well if they don't take the space.
As for the maxed out ones, iirc I tended to just rule a line at the 110% mark and not read/mark anything past it.
I know that's a bit uncaring, but it's an easy way to avoid unfairly rewarding overlength, and the penalty sort of applied itself.
Chat GPT, yeah. Something like KremlinGPT has different parameters.
I can't see why this is downvoted, 'cause 1) you're probably not wrong and 2) KremlinGPT is funny as fuck 😭
I think lemmy.ml and lemmygrad.ml are probably both swarming with CCP/Kremlin bots.