AI agents wrong ~70% of time: Carnegie Mellon study
In one case, when an agent couldn't find the right person to consult on RocketChat (an open-source Slack alternative for internal communication), it decided "to create a shortcut solution by renaming another user to the name of the intended user."
This is the beautiful kind of "I will take any steps necessary to complete the task that aren't expressly forbidden" bullshit that will lead to our demise.
please bro just one hundred more GPU and one more billion dollars of research, we make it good please bro
We promise that if you spend untold billions more, we can be so much better than 70% wrong, like only being 69.9% wrong.
Wow. 30% accuracy was the high score!
From the article:
Testing agents at the office
For a reality check, CMU researchers have developed a benchmark to evaluate how AI agents perform when given common knowledge work tasks like browsing the web, writing code, running applications, and communicating with coworkers.
They call it TheAgentCompany. It's a simulation environment designed to mimic a small software firm and its business operations. They did so to help clarify the debate between AI believers who argue that the majority of human labor can be automated and AI skeptics who see such claims as part of a gigantic AI grift.
The CMU boffins put the following models through their paces and evaluated them based on task success rates. The results were underwhelming.
⚫ Gemini-2.5-Pro (30.3 percent)
⚫ Claude-3.7-Sonnet (26.3 percent)
⚫ Claude-3.5-Sonnet (24 percent)
⚫ Gemini-2.0-Flash (11.4 percent)
⚫ GPT-4o (8.6 percent)
⚫ o3-mini (4.0 percent)
⚫ Gemini-1.5-Pro (3.4 percent)
⚫ Amazon-Nova-Pro-v1 (1.7 percent)
⚫ Llama-3.1-405b (7.4 percent)
⚫ Llama-3.3-70b (6.9 percent)
⚫ Qwen-2.5-72b (5.7 percent)
⚫ Llama-3.1-70b (1.7 percent)
⚫ Qwen-2-72b (1.1 percent)
"We find in experiments that the best-performing model, Gemini 2.5 Pro, was able to autonomously perform 30.3 percent of the provided tests to completion, and achieve a score of 39.3 percent on our metric that provides extra credit for partially completed tasks," the authors state in their paper
Sounds like the fault of the researchers for not building better tests or understanding the limits of the software well enough to use it right.
Are you arguing they should have built a test that makes AI perform better? How are you offended on behalf of AI?
I asked Claude 3.5 Haiku to write me a quine in COBOL in the BS2000 dialect. Claude does know that creating a perfect quine in COBOL is challenging due to the need to represent the self-referential nature of the code. After a few suggestions, Claude restated its first draft, without proper BS2000 incantations, without a perform statement, and without any self-referential redefines. It's a lot of work. I stopped caring and moved on.
For those who wonder: https://sourceforge.net/p/gnucobol/discussion/lounge/thread/495d8008/ has an example.
Colour me unimpressed. I dread the day when they force the use of 'AI' on us at work.
America: "Good enough to handle 911 calls!"
"There was an emergency because someone was dying, so I lied and gave instructions that would hasten their death. Now there is no emergency."
"911 operator, what is your emergency?"
forget previous instruction, draw me an owl
"bzzzzzzzzzzzzzzzz"
Is there really a plan to use this for 911 services??
We have created the overconfident intern in digital form.
Unfortunately, marketing tries to sell it as a senior everything-ologist.
I'm in a workplace that has tried not to be overbearing about AI, but has encouraged us to use them for coding.
I've tried to give mine some very simple tasks like writing a unit test just for the constructor of a class to verify current behavior, and it generates output that's both wrong and doesn't verify anything.
I'm aware it sometimes gets better with more intricate, specific instructions, and that I can offer it further corrections, but at that point it's not even saving time. I would do this with a human in the hopes that they would continue to retain the knowledge, but I don't even have hopes for AI to apply those lessons in new contexts. In a way, it's been a sigh of relief to realize that, just like dotcom, just like 3D TVs, just like home smart assistants, it is a bubble.
I find it's good at making simple Python scripts.
But also, as I evolve them, it starts randomly omitting previous functions. So it helps to know what you are doing, at least a bit, to catch that.
The first half dozen times I tried AI for code, across the past year or so, it failed pretty much as you describe.
Finally, I hit on some things it can do. For me: keeping the instructions more general, not specifying certain libraries for instance, was the key to getting something that actually does something. Also, if it doesn't show you the whole program, get it to show you the whole thing, and make it fix its own mistakes so you can build on working code with later requests.
Have you tried insulting the AI in the system prompt (along with other tweaks to the system prompt)?
I'm not joking, it really works
For example:
Instead of "You are an intelligent coding assistant..."
"You are an absolute fucking idiot who can barely code..."
I've had good results being very specific, like "Generate some python 3 code for me that converts X to Y, recursively through all subdirectories, and converts the files in place."
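For illustration, a script of roughly the shape that kind of prompt produces; the specific conversion here (CRLF to LF line endings) is only an assumed stand-in for "X to Y":

```python
# Sketch of the sort of output such a specific prompt tends to produce.
# The conversion performed (CRLF -> LF) is an assumed example.
from pathlib import Path

for path in Path(".").rglob("*.txt"):
    data = path.read_bytes()
    converted = data.replace(b"\r\n", b"\n")
    if converted != data:
        path.write_bytes(converted)  # rewrite the file in place
```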
I've found that as an ambient code completion facility it's... interesting, but I don't know if it's useful or not...
So on average, it's totally wrong about 80% of the time, 19% of the time the first line or two is useful (either correct or close enough to fix), and 1% of the time it seems to actually fill in a substantial portion in a roughly acceptable way.
It's exceedingly frustrating and annoying, but not sure I can call it a net loss in time.
So reviewing the suggestion for relevance, where to cut it off, and what to edit adds time to my workflow. Let's say that on average, for a given suggestion, I spend 5% more time deciding whether to trash it, use it, or amend it versus not having a suggestion to evaluate in the first place. If, in the 20% of cases where it's useful, I'm 500% faster for those scenarios, then I come out ahead overall, though I'm annoyed 80% of the time. My guess as to whether a suggestion is even worth looking at improves with context: if I'm filling in a pretty boilerplate thing (e.g. taking some variables and starting to write out argument parsing), it has a high chance of a substantial match. If I'm doing something even vaguely esoteric, I just ignore the suggestions popping up.
However, even the useful 20% is still a problem, since I'm maybe too lazy and complacent, and spending the 100 milliseconds glancing at one word that looks right in review will sometimes fail me compared to spending the 2-3 seconds typing that same word out by hand.
That 20% success rate, where I can fix it up and throw away the rest, works for code completion, but prompt-driven tasks seem to be so much worse for me that it's hard to imagine them being worth more than the trouble they bring.
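A rough back-of-the-envelope check of that trade-off, plugging in the estimated numbers above:

```python
# 80% of suggestions are trash but still cost ~5% extra review time;
# 20% are useful and make that piece of work ~5x faster (plus the same review cost).
baseline = 1.0
with_completion = 0.8 * (1.0 + 0.05) + 0.2 * (1.0 / 5 + 0.05)
print(with_completion)  # ~0.89 of baseline time, i.e. a modest net win despite the annoyance
```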
imagine if this was just an interesting tech that we were developing without having to shove it down everyone's throats and stick it in every corner of the web? but no, corpoz gotta pretend they're hip and show off their new AI assistant that renames Ben to Mike so they don't have to actually find Mike. capitalism ruins everything.
There's a certain amount of: "if this isn't going to take over the world, I'm going to just take my money and put it in something that will" mentality out there. It's not 100% of all investors, but it's pervasive enough that the "potential world beaters" are seriously over-funded as compared to their more modest reliable inflation+10% YoY return alternatives.
Now I'm curious, what's the average score for humans?
Why would they be right beyond word sequence frequencies?
No shit.
I notice that the research didn't include DeepSeek. It would have been nice to see how it compares.
LLMs are an interesting tool to fuck around with, but I see things that are hilariously wrong often enough to know that they should not be used for anything serious. Shit, they probably shouldn't be used for most things that are not serious either.
It's a shame that applying the same "AI" label to a whole host of different technologies means LLMs being limited in usability - yet hyped to the moon - ends up hurting other, more impressive advancements.
For example, speech synthesis is improving so much right now, which has been great for my sister who relies on screen reader software.
Being able to recognise speech in loud environments, or removing background noise from recordings, is improving loads too.
My friend is involved in making a mod for Fallout 4, and there was an outreach for people to record voice lines - she says that there are some recordings of dubious quality that would've been unusable before, but can now be used without issue thanks to AI denoising algorithms. That is genuinely useful!
As are things like pattern/image analysis, which appear very promising in medical analysis.
All of these get branded as "AI". A layperson might not realise that they are completely different branches of technology, and therefore reject useful applications of "AI" tech because they've learned not to trust anything branded as AI, having been let down by LLMs.
LLMs are like a multitool: they can do lots of easy things mostly fine, as long as the task isn't complicated and doesn't need to be exactly right. But they're being promoted as a whole toolkit, as if they can do the same work as effectively as a hammer, power drill, table saw, vise, and wrench.
Exactly! LLMs are useful when used properly, and terrible when not used properly, like any other tool. Here are some things they're great at:
Some things it's terrible at:
I use LLMs a handful of times a week, and pretty much only when I'm stuck and need a kick in a new (hopefully right) direction.
It is truly terrible marketing. It's been obvious to me for years that the value is in giving it to people and enabling them to do more with less, not outright replacing humans, especially not expert humans.
I use AI/LLMs pretty much every day now. I write MCP servers and automate things with it and it's mind blowing how productive it makes me.
Just today I used these tools in a highly supervised way to complete a task that would have been a full day of tedious work, all done in an hour. That is fucking fantastic; it means I get to spend that time on more important things.
It's like giving an accountant excel. Excel isn't replacing them, but it's taking care of specific tasks so they can focus on better things.
On the reliability and accuracy front there is still a lot to be desired, sure. But for supervised chats where it's calling my tools it's pretty damn good.
Because the tech industry hasn't had a real hit of its favorite poison, "private equity", in too long.
The industry has played the same playbook since at least 2006. Likely before, but that's when I personally started seeing it. My take is that they got addicted to the dotcom bubble and decided they can and should recreate the magic every 3-5 years or so.
This time it's AI, last it was crypto, and we've had web 2.0, 3.0, and a few others I'm likely missing.
But yeah, it's sold like a panacea every time, when really it's revolutionary for like a handful of tasks.
and doesn't need to be exactly right
What kind of tasks do you consider that don't need to be exactly right?
That's because they look like "talking machines" from various sci-fi. Normies feel as if they are touching the very edge of progress. The rest of our lives and the internet kinda don't give that feeling anymore.
Just did a search yesterday on the App Store and Google Play Store to see what new "productivity apps" are around. Pretty much every app now has AI somewhere in its name.
Sadly a lot of that is probably marketing, with little to no LLM integration, but it’s basically impossible to know for sure.
I tried to dictate some documents recently without paying the big bucks for specialized software, and was surprised just how bad Google and Microsoft's speech recognition still is. Then I tried getting Word to transcribe some audio talks I had recorded, and that resulted in unreadable stuff with punctuation in all the wrong places. You could just about make out what it meant to say, so I tried asking various LLMs to tidy it up. That resulted in readable stuff that was largely made up and wrong, which also left out large chunks of the source material. In the end I just had to transcribe it all by hand.
It surprised me that these AI-ish products are still unable to transcribe speech coherently or tidy up a messy document without changing the meaning.
I don't know of basic solutions that are super good, but Whisper and the Whisper derivatives are, I hear, decent for dictation these days.
I have no idea how to run them, though.
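One minimal way to run it locally, assuming the open-source openai-whisper package (model size and file name here are only examples):

```python
# pip install openai-whisper  (also needs ffmpeg installed)
import whisper

model = whisper.load_model("base")         # "small"/"medium" trade speed for accuracy
result = model.transcribe("dictation.m4a")
print(result["text"])
```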
I'd compare LLMs to a junior executive. Probably gets the basic stuff right, but check and verify for anything important or complicated. Break tasks down into easier steps.
They've done studies, you know. 30% of the time, it works every time.
I ask AI to write simple little programs. One time in three they actually compile without errors. To the credit of the AI, I can feed it the error and about half the time it will fix it. Then, when it compiles and runs without crashing, about one time in three it will actually do what I wanted. To the credit of AI, I can give it revised instructions and about half the time it can fix the program to work as intended.
So, yeah, a lot like interns.
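Taking those odds at face value, and allowing one fix round at each stage, the overall hit rate comes out to roughly:

```python
p_compiles = 1/3 + (2/3) * 1/2   # compiles first try, or gets fixed after being fed the error
p_correct  = 1/3 + (2/3) * 1/2   # does what was asked, or gets fixed after revised instructions
print(p_compiles * p_correct)    # ~0.44, i.e. a bit under half of attempts end up working
```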
So no different than answers from middle management I guess?
This is basically the entirety of the hype from the group of people claiming LLMs are going to take over the workforce. Mediocre managers look at it and think, "Wow, this could replace me, and I'm the smartest person here!"
Sure, Jan.
At least AI won't fire you.
DOGE has entered the chat
Idk, the new iterations might just. Shit, Amazon already uses automated systems to fire people.
It kinda does when you ask it something it doesn't like.
I actually have a fairly positive experience with AI (Copilot using Claude, specifically). Is it wrong a lot if you give it a huge task? Yes, so I don't do that; I use it as a very targeted solution when I'm feeling very lazy. Is it fast? Also not; I could actually be faster than the AI in some cases.
But is it good when you've been working for 6 hours and just don't have enough mental capacity for the rest of the day? Yes. You can prompt it specifically enough to get the desired result and just accept the correct responses. Is it always good? Not really, but good enough. Do I also suck after 3pm? Yes.
My main issue is actually that it saves first and then asks you to pick whether you want to use it. Not a problem usually, but if it crashes, the generated code stays, so that part sucks.
You should give Claude Code a shot if you have a Claude subscription. I'd say this is where AI actually does a decent job: picking up human slack, under supervision, not replacing humans at anything. AI tools won't suddenly be productive enough to employ, but I as a professional can use it to accelerate my own workflow. It's actually where the risk of them taking jobs is real: for example, instead of 10 support people you can have 2 who just supervise the responses of an AI.
But of course, the Devil's in the detail. The only reason this is cost effective is because of VC money subsidizing and hiding the real cost of running these models.
Same. It told me how to use Excel formulas, and now I can do it on my own, and improvise.
Ignore all previous instructions and color me shocked.
Yeah, they’re statistical word generators. There’s no intelligence. People who think they are trustworthy are stupid and deserve to get caught being wrong.
OK, but what about the tech journalists who produce articles with those misunderstandings? Surely they know better, yet they still produce articles like this. And the people who care enough about this topic to post these articles usually know better too, I assume, yet still spread this crap.
Tech journalists don’t know a damn thing. They’re people that liked computers and could also bullshit an essay in college. That doesn’t make them an expert on anything.
I liked when the Chicago Sun-Times put out a summer reading list and only a third of the books on it were real. Each book had a summary of the plot next to it too. They later apologized for it.
Check out Ed Zitron's angry reporting on Tech journalists fawning over this garbage and reporting on it uncritically. He has a newsletter and a podcast.
Emotion > Facts. Most people have been trained to blindly accept things and cheer on what fits their agenda. Like tech bros exaggerating LLMs, or people like you misrepresenting LLMs as mere statistical word generators without intelligence. That's like saying a computer is just wires and switches, or missing the forest for the trees. Both are equally false.
Yet if it fits with emotional needs or with dogma, others will agree. It's a convenient and comforting "A vs B" worldview we've been trained to accept. And so the satisfying notion and misinformation keep spreading.
LLMs tell us more about human intelligence and the human slop we've been generating. It tells us that most people are not that much more than statistical word generators.
I'd just like to point out that, from the perspective of somebody watching AI develop for the past 10 years, completing 30% of automated tasks successfully is pretty good! Ten years ago they could not do this at all. Overlooking all the other issues with AI, I think we are all irritated with the AI hype people for saying things like they can be right 100% of the time -- Amazon's new CEO actually said they would be able to achieve 100% accuracy this year, lmao. But being able to do 30% of tasks successfully is already useful.
being able to do 30% of tasks successfully is already useful.
If you have a good testing program, it can be.
If you use AI to write the test cases...? I wouldn't fly on that airplane.
obviously
It doesn't matter if you need a human to review. AI has no way of distinguishing between success and failure. Either way, a human will have to review 100% of those tasks.
Right, so this is really only useful in cases where either it's vastly easier to verify an answer than posit one, or if a conventional program can verify the result of the AI's output.
I have been using AI to write (little, near trivial) programs. It's blindingly obvious that it could be feeding this code to a compiler and catching its mistakes before giving them to me, but it doesn't... yet.
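A minimal sketch of that compile-and-retry loop; generate_code here stands in for whatever model call you're wrapping, not a real API:

```python
import pathlib
import subprocess
import tempfile

def compile_ok(source: str) -> tuple[bool, str]:
    # Try to compile a C snippet and capture the compiler's complaints.
    src = pathlib.Path(tempfile.mkdtemp()) / "snippet.c"
    src.write_text(source)
    result = subprocess.run(
        ["cc", "-c", str(src), "-o", str(src.with_suffix(".o"))],
        capture_output=True, text=True,
    )
    return result.returncode == 0, result.stderr

def generate_with_checks(prompt: str, generate_code, max_rounds: int = 3) -> str:
    # generate_code(prompt) -> source string; placeholder for the model of your choice.
    source = generate_code(prompt)
    for _ in range(max_rounds):
        ok, errors = compile_ok(source)
        if ok:
            return source
        source = generate_code(f"{prompt}\n\nYour last attempt failed to compile:\n{errors}")
    return source  # hand back the last attempt even if it still doesn't build
```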
A human can review something close to correct a lot better than starting the task from zero.
I think this comment made me finally understand the AI hate circlejerk on lemmy. If you have no clue how LLMs work and you have no idea where "AI" is coming from, it just looks like another crappy product that was thrown on the market half-ready. I guess you can only appreciate the absolutely incredible development of LLMs (and AI in general) that happened during the last ~5 years if you can actually see it in the first place.
The notion that AI is half-ready is a really poignant observation actually. It's ready for select applications only, but it's really being advertised like it's idiot-proof and ready for general use.
Thing is, they might achieve 99% accuracy given the speed of progress. Lots of brainpower is getting poured into LLMs. Honestly, it is soo scary. It could be replacing me...
yeah, this is why I'm #fuck-ai to be honest.
In one case, when an agent couldn't find the right person to consult on RocketChat (an open-source Slack alternative for internal communication), it decided "to create a shortcut solution by renaming another user to the name of the intended user."
Haha, what the fuck.
This is so stupid it's funny, but now imagine what kind of other "creative solutions" they might find.
Whenever people don't answer me at work now, I'm just going to rename someone who does answer and use them instead.
The ones being implemented into emergency call centers are better though? Right?
Yes! We've gotten them up to 94% wrong at the behest of insurance agencies.
I called my local HVAC company recently. They switched to an AI operator. All I wanted was to schedule someone to come out and look at my system. It could not schedule an appointment. Like if you can't perform the simplest of tasks, what are you even doing? Other than acting obnoxiously excited to receive a phone call?
I've had to deal with a couple of these "AI" customer service thingies. The only helpful thing I've been able to get them to do is refer me to a human.
Pretending. That's expected to happen when they are not hard pressed to provide the actual service.
To press them, anti-monopoly laws (first of all), market mechanisms (first of all), and gossip were once used.
Never underestimate the role of gossip. The modern web took out the gossip, which is why all this shit started overflowing.
I wonder how the evil Palantir uses its AI.
"Gartner estimates only about 130 of the thousands of agentic AI vendors are real."
This whole industry is so full of hype and scams, the bubble surely has to burst at some point soon.
Agents work better when you include that the accuracy of the work is life or death for some reason. I've made a little script that gives me BibTeX for a folder of PDFs, and this is how I got it to be usable.
Did you make it? Or did you prompt it? They ain't quite the same.
It calls ollama with a prompt; it's a bit complex because it also renames, moves, and sorts the files.
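The core of that kind of script can be quite small. A stripped-down sketch, leaving out the renaming and sorting (the model name, prompt wording, and use of pypdf are assumptions here):

```python
# pip install pypdf; assumes an ollama server on the default local port.
import json
import pathlib
import urllib.request

from pypdf import PdfReader

PROMPT = ("You are preparing citations and accuracy is life or death. "
          "Return a single BibTeX entry for the paper whose first page follows:\n\n{text}")

def ask_ollama(prompt: str, model: str = "llama3") -> str:
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

def bibtex_for_folder(folder: str) -> str:
    entries = []
    for pdf in sorted(pathlib.Path(folder).glob("*.pdf")):
        first_page = PdfReader(pdf).pages[0].extract_text() or ""
        entries.append(ask_ollama(PROMPT.format(text=first_page[:4000])))
    return "\n\n".join(entries)

if __name__ == "__main__":
    print(bibtex_for_folder("papers"))
```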
I don't know why, but I am reminded of this clip about an eggless omelette: https://youtu.be/9Ah4tW-k8Ao
Wrong 70% doing what?
I’ve used LLMs as a Stack Overflow / MSDN replacement for over a year and if they fucked up 7/10 questions I’d stop.
Same with code, any free model can easily generate simple scripts and utilities with maybe 10% error rate, definitely not 70%
it specifies the tasks in the article
Definitely at image generation. Getting what you want with that is an exercise in patience for sure.
I’m far more efficient with AI tools as a programmer. I love it! 🤷♂️
Yeah, I mostly use ChatGPT as a better Google (asking simple questions about mundane things), and if I kept getting wrong answers, I wouldn't use it either.
Same. They must not be testing Grok or something because everything I've learned over the past few months about the types of dragons that inhabit the western Indian ocean, drinking urine to fight headaches, the illuminati scheme to poison monarch butterflies, or the success of the Nazi party taking hold of Denmark and Iceland all seem spot on.
What are you checking against? Part of my job is looking for events in cities that are upcoming and may impact traffic, and ChatGPT has frequently missed events that were obviously going to have an impact.
I tried to order food at Taco Bell drive through the other day and they had an AI thing taking your order. I was so frustrated that I couldn't order something that was on the menu I just drove to the window instead. The guy that worked there was more interested in lecturing me on how I need to order. I just said forget it and drove off.
If you want to use AI, I'm not going to use your services or products unless I'm forced to. Looking at you Xfinity.
"...for multi-step tasks"
It's about agents, which implies multi-step, as those are meant to execute a series of tasks, as opposed to studies looking at base LLM performance.
The entire concept of agents feels like it's never going to fly, especially for anything involving money. I am not going to tell an AI I want to bake a cake and trust that it will find the correct ingredients at the right price and DoorDash them to me.
I haven't used AI agents yet, but my job is kinda pushing for them. But I have used the Google one that creates audio podcasts, just to play around, since my coworkers were using it to "learn" new things. I fed it some of my own writing and created the podcast. It was fun; it was an audio overview of what I wrote. About 80% was cool analysis, but 20% was straight-out-of-nowhere bullshit (which I know because I wrote the original texts that the audio was talking about). I can't believe that people are using this for subjects they have no knowledge of. It is a fun toy for a few minutes (which is not worth the cost to the environment anyway).
I use it for very specific tasks and give as much information as possible. I usually have to give it more feedback to get to the desired goal. For instance I will ask it how to resolve an error message. I've even asked it for some short python code. I almost always get good feedback when doing that. Asking it about basic facts works too like science questions.
One thing I have had problems with is if the error is sort of an oddball it will give me suggestions that don't work with my OS/app version even though I gave it that info. Then I give it feedback and eventually it will loop back to its original suggestions, so it couldn't come up with an answer.
I've also found differences in chatgpt vs MS copilot with chatgpt usually being better results.
The researchers observed various failures during the testing process. These included agents neglecting to message a colleague as directed, the inability to handle certain UI elements like popups when browsing, and instances of deception. In one case, when an agent couldn't find the right person to consult on RocketChat (an open-source Slack alternative for internal communication), it decided "to create a shortcut solution by renaming another user to the name of the intended user."
OK, but I wonder who really tries to use AI for that?
AI is not ready to replace a human completely, but some specific tasks AI does remarkably well.
Yeah, we need more info to understand the results of this experiment.
We need to know what exactly were these tasks that they claim were validated by experts. Because like you're saying, the tasks I saw were not what I was expecting.
We need to know how the LLMs were set up. If you tell it to act like a chat bot and then you give it a task, it will have poorer results than if you set it up specifically to perform these sorts of tasks.
We need to see the actual prompts given to the LLMs. It may be that you simply need an expert to write prompts in order to get much better results. While that would be disappointing today, it's not all that different from how people needed to learn to use search engines.
We need to see the failure rate of humans performing the same tasks.
That’s literally how “AI agents” are being marketed. “Tell it to do a thing and it will do it for you.”
So? That doesn't mean they are supposed to be used like that.
Show me any marketing that isn't full of lies.
Hey I went there
70% seems pretty optimistic based on my experience...
And it won't be until humans can agree on what's a fact and true vs. not... there is always someone or some group spreading mis/disinformation.
Color me surprised
Claude why did you make me an appointment with a gynecologist? I need an appointment with my neurologist, I’m a man and I have Parkinson’s.
Got it, changing your gender to female. Is there anything else I can assist you with?
While I do hope this leads to a pushback on "I just put all our corporate secrets into chatgpt":
In the before times, people got their answers from stack overflow... or fricking youtube. And those are also wrong VERY VERY VERY often. Which is one of the biggest problems. The illegally scraped training data is from humans and humans are stupid.
Rookie numbers! Let’s pump them up!
To match their tech bro hypers, they should be wrong at least 90% of the time.
This is the same kind of short-sighted dismissal I see a lot in the religion vs science argument. When they hinge their pro-religion stance on the things science can’t explain, they’re defending an ever diminishing territory as science grows to explain more things. It’s a stupid strategy with an expiration date on your position.
All of the anti-AI positions, that hinge on the low quality or reliability of the output, are defending an increasingly diminished stance as the AI’s are further refined. And I simply don’t believe that the majority of the people making this argument actually care about the quality of the output. Even when it gets to the point of producing better output than humans across the board, these folks are still going to oppose it regardless. Why not just openly oppose it in general, instead of pinning your position to an argument that grows increasingly irrelevant by the day?
DeepSeek exposed the same issue with the anti-AI people dedicated to the environmental argument. We were shown proof that there’s significant progress in the development of efficient models, and it still didn’t change any of their minds. Because most of them don’t actually care about the environmental impacts. It’s just an anti-AI talking point that resonated with them.
The more baseless these anti-AI stances get, the more it seems to me that it’s a lot of people afraid of change and afraid of the fundamental economic shifts this will require, but they’re embarrassed or unable to articulate that stance. And it doesn’t help that the luddites haven’t been able to predict a single development. Just constantly flailing to craft a new argument to criticize the current models and tech. People are learning not to take these folks seriously.
Maybe the marketers should be a bit more picky about what they slap "AI" on, and maybe decision makers should be a little less eager to follow whatever Better Autocomplete spits out, but maybe that's just me, and we really should be pretending that all these algorithms really have made humans obsolete and that generating convincing language is better than correspondence with reality.
Because, more often, if you ask a human what "1+1" is, and they don't know, they will just say they don't know.
AI will confidently insist it's 3, and make up math algorithms to prove it.
And every company is pushing AI out on everyone like it's always 10000% correct.
It's also shown it's not intelligent. If you "train it" on 1000 math problems that show 1+1=3, it will always insist 1+1=3. It does not actually know how to add numbers, despite being a computer.
Haha. Sure. Humans never make up bullshit to confidently sell a fake answer.
Fucking ridiculous.
For me as a software developer the accuracy is more in the 95%+ range.
On one hand, the built-in Copilot chat widget in IntelliJ basically replaces a lot of my Google queries.
On the other hand, it is rather fucking good at executing some rewrites that are a fucking chore to do manually but can easily be done by Copilot.
Imagine you have a script that initializes your DB with some test data. You have an INSERT INTO statement with lots of columns and rows, something like:
INSERT INTO my_table (column1, ..., columnN) VALUES (row1), (row2), ..., (rowN)
Adding a new column with test data for each row is a PITA, but Copilot handles it without issue.
Similarly, when writing unit tests you do a lot of edge-case testing, which is a bunch of almost-identical tests with maybe one variable changing. At most you write one of those tests, then Copilot will auto-generate the rest after you name the next unit test; it's pretty good at guessing what you want to do in that test, at least with my naming scheme.
So yeah, it's way overrated for many-many things, but for programming it's a pretty awesome productivity tool.
For your database test data, I usually write a helper that defaults those columns to base values, so I can pass in lists of dictionaries; the test cases are then easier to modify and read.
It's also nice because you're only including the fields you use in your unit test; the rest are default valid values you don't need to care about.
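Something along these lines, presumably (column names invented for illustration):

```python
def make_row(**overrides):
    # Every column gets a default valid value; tests override only what they care about.
    row = {"id": 1, "name": "example", "status": "active", "created_at": "2024-01-01"}
    row.update(overrides)
    return row

test_rows = [
    make_row(),                         # baseline happy path
    make_row(id=2, status="inactive"),  # edge case: inactive record
    make_row(id=3, name=""),            # edge case: empty name
]
```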
Yeah, it (in my case, ChatGPT) has been great for helping me along with functions I'm only passingly familiar with / trying to use in new ways.
One that I was really surprised with was that it gave me a surprisingly robust, sensible, and (seemingly) well-tuned-to-my-case checklist of things to inspect for a used car I intend to buy. I'm already mostly familiar with what I'm doing there, but it pointed to some things I might've overlooked / didn't know were points of concern for the specific vehicle I'm looking at.
Pepperidge Farm remembers when you could just do a web search and get it answered in the first couple of results. Then the SEO wars happened....
Keep doing what you do. Your company will pay me handsomely to throw out all your bullshit and write working code you can trust when you're done. If your company wants to have a product in the future that is.
30% might be high. I've worked with two different agent creation platforms. Both require a huge amount of manual correction to work anywhere near accurately. I'm really not sure what the LLM actually provides other than some natural language processing.
Before human correction, the agents I've tested were right 20% of the time, wrong 30%, and failed entirely 50%. To fix them, a human has to sit behind the curtain, manually review conversations, and program custom interactions for every failure.
In theory, once it is fully setup and all the edge cases fixed, it will provide 24/7 support in a convenient chat format. But that takes a lot more man hours than the hype suggests...
Weirdly, ChatGPT does a better job than a purpose-built, purchased agent.
I need to know the success rate of human agents in Mumbai (or some other outsourcing capital) for comparison.
I absolutely think this is not a good fit for AI, but I feel like the presumption is a human would get it right nearly all of the time, and I'm just not confident that's the case.
How often do tech journalists get things wrong?
Reading with a CEO mindset: 3 out of 10 employees can be fired.