Anyone actually seeing AI do the jobs of tech workers?

I saw another article today saying how companies are laying off tech workers because AI can do the same job. But no concrete examples... again. I figure they are laying people off so they can pay to chase the AI dream. Just mortgaging tomorrow to pay for today's stock price increase. Am I wrong?

78 comments
  • Do the job? No. Noticeably increase productivity, and reduce time spent on menial tasks? Yes.

    I suspect the layoffs are partly motivated by the expectation that remaining workers will be able to handle a larger workload with the help of AI.

    US companies in particular are also heavily outsourcing jobs overseas, for cheaper. They just don't like to be transparent about that part, so the AI excuse takes the spotlight.

    • I agree completely.

      We have an AI bot that scans the support tickets that come in for our business.

      It has a pretty low success rate, maybe 10% to 20% accuracy, in suggesting the right answer.

      It puts its answer into the support ticket; it does not reply to the customer directly. That would be a disaster.

      But 10% or so of our workload has now been offloaded onto the AI, which means our existing team is roughly 10% more efficient.

      It's also been relatively helpful in training new employees. They can read what the AI suggests and see if it is correct or not. And in learning whether it is correct or not, they are learning our systems.
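
      A minimal sketch of what that kind of pipeline could look like, assuming a generic LLM client (`ask_llm` here is a placeholder, not a real API):

      ```python
      # Hypothetical sketch of the workflow described above: the bot drafts
      # an internal note on each ticket, and a human decides what to do with it.
      # `ask_llm` is a stand-in for whatever LLM client you actually use.

      def ask_llm(prompt: str) -> str:
          raise NotImplementedError("plug in your LLM client here")

      def triage_ticket(ticket_text: str) -> dict:
          suggestion = ask_llm(
              "Suggest a resolution for this support ticket. "
              "If you are not confident, say so explicitly.\n\n" + ticket_text
          )
          # The suggestion goes into an internal field only -- never straight
          # to the customer, for exactly the reasons given above.
          return {"ticket": ticket_text, "internal_ai_note": suggestion}
      ```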

      • "They can read what the AI suggests and see if it is correct or not."

        What does this process look like? Are there any guardrails that prevent a new employee from blindly trusting what the AI is suggesting?

      • That’s also true when processing bills. The AI can give you suggestions, which often require some tweaking. Sometimes, though, the proposed numbers are spot on, which is nice. If you measure the productivity of one particular step in a long process, I would estimate that AI gives it a pretty good boost. However, that’s just one step, so by the end of the week the actual time savings are marginal. Well, better than nothing, I guess.
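
        Back-of-the-envelope math for why the weekly savings stay marginal: if the AI-assisted step is only a small slice of the total work, the overall speedup is capped (Amdahl's law). The numbers below are made up purely for illustration:

        ```python
        # Made-up numbers: the AI-assisted step is 10% of the week,
        # and the AI makes that one step twice as fast.
        step_fraction = 0.10
        step_speedup = 2.0

        # Amdahl's law: total speedup is limited by the untouched 90%.
        overall = 1 / ((1 - step_fraction) + step_fraction / step_speedup)
        print(f"{overall:.3f}x overall")  # ~1.053x, i.e. about 5% saved
        ```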

    • "reduce time spent on menial tasks"

      Absolutely. It's at the level where it can throw basic shit together without too much trouble, provided there is a competent human in the workflow to tune inputs and sanitise outputs.

      • I use it to write my PR descriptions, generate class and method docstrings, annotate code I'm trying to grok or translate, and so forth. I don't even use it to actually generate code, and it still likely saves me a couple of hours a week.
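
        A hypothetical helper along those lines: feed the branch diff to a model and get a draft PR description back. `ask_llm` is a placeholder for whatever client you use; only the git invocation is real:

        ```python
        # Sketch: draft a PR description from the branch diff.
        import subprocess

        def ask_llm(prompt: str) -> str:
            raise NotImplementedError("plug in your LLM client here")

        def draft_pr_description(base_branch: str = "main") -> str:
            diff = subprocess.run(
                ["git", "diff", base_branch + "...HEAD"],
                capture_output=True, text=True, check=True,
            ).stdout
            return ask_llm(
                "Write a concise pull request description (summary, "
                "motivation, notable changes) for this diff:\n\n" + diff
            )
        ```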

  • "Tech workers" is pretty broad.

    Tech Support

    There are support chatbots today that act as a support feature for people who want to ask English-language questions rather than search for answers. Those were around even before LLMs and could work on even simpler principles. Having tier-1 support workers work off a flowchart is a thing, and you can definitely make a computer do that even without any learning capability at all (a toy version is sketched below). So they can definitely fill some amount of that role. I don't know how far it will go, though. I think there are probably going to be fundamental problems with novel or customer-specific issues, because a model just won't have been trained on them, and it's going to have a hard time synthesizing an answer from answers to multiple unrelated problems in its training corpus. So I'd say: yeah, to some degree, and we've successfully used expert systems and other forms of machine learning in the past to automate some basic stuff here. I don't think this is going to be able to do the field as a whole.
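
    For the flowchart point, that's literally just a plain decision tree, no learning involved (questions and answers invented for illustration):

    ```python
    # A tier-1 support "flowchart" as a plain decision tree -- no ML needed.
    FLOWCHART = {
        "question": "Is the device powered on?",
        "no": "Power it on and try again.",
        "yes": {
            "question": "Is it connected to the network?",
            "no": "Reconnect the cable or Wi-Fi and try again.",
            "yes": "Escalate to tier 2.",
        },
    }

    def walk(node, answer_fn):
        # Follow yes/no answers until we hit a leaf (a resolution string).
        while isinstance(node, dict):
            node = node["yes" if answer_fn(node["question"]) else "no"]
        return node

    # e.g. walk(FLOWCHART, lambda q: input(q + " [y/n] ").strip() == "y")
    ```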

    Writing software

    Can existing LLM systems write software? No. I don't think that they are an effective tool to pump out code. I also don't think that the current, "shallow" understanding that they have is amenable to doing so.

    I think what LLMs do well is produce stuff that is different, but appears to a human to be similar to other content. That works, to varying degrees, for a variety of uses where the content is consumed by humans.

    But humans deal well with errors in what we see. The kinds of errors in AI-generated images aren't a big issue for us -- they just need to cue up our memories of things in our heads. Programming languages are not very tolerant of that kind of error, and I don't think that there's a very effective way to lower the error rate.

    I think that it might be possible to make use of an LLM-driven "warning" system when writing software; I'm not sure if someone has done something like that. Think of something that works the way a grammar checker does for natural language. Having a higher error rate is acceptable there. That might reduce the amount of labor required to write code, though I don't think that it'll replace it.

    Maybe it's possible to look for common security errors to flag for a human by training a model to recognize those.
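
    A sketch of that warning-system idea, with `ask_llm` again a placeholder and the prompt invented: flag liberally and let a human judge each flag, the way a grammar checker underlines things without rewriting your sentence:

    ```python
    # Sketch of an LLM "grammar checker" for code: it only flags lines,
    # it never rewrites them, so a high false-positive rate is tolerable.

    def ask_llm(prompt: str) -> str:
        raise NotImplementedError("plug in your LLM client here")

    def flag_suspicious_lines(source: str) -> str:
        return ask_llm(
            "Review this code and list lines that look like common bugs "
            "or security mistakes (injection, unchecked input, hardcoded "
            "secrets). Flag liberally; a human will review every item. "
            "Do not rewrite the code.\n\n" + source
        )
    ```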

    I also think that software development is probably one of the more-heavily-automated fields out there because, well, people who write software make systems to do things over and over. High-level programming languages rather than writing assembly, software libraries, revision control...all that was written to automate away parts of tasks. I think that in general, a lot of the low-hanging fruit has been taken.

    Does that mean that I think that software cannot be written by AI? No. I am sure that AI can write software. But I don't think that the AI systems that we have today, or systems that are slightly tweaked, or systems that just have a larger model, or something along those lines, are going to be what takes over software development. I also think that the hurdles we'd need to clear to have AI fully write software require getting near an AI that can do anything a human can do. I think that we will eventually get there, and when we do, we'll see human labor in general be automated. But I don't think that OpenAI or Microsoft are a year away from that.

    System and network administration

    Again, I'm skeptical that interacting with computers is where LLMs are going to be the most-effective. Computers just aren't that tolerant of errors. Most of the things that I can think of that you could use an AI to do, like automated configuration management or something, already have some form of automated tools in that role.

    Also, I think that obtaining training data for this corpus is going to be a pain. That is, I don't think that sysadmins are going to generally be okay with you logging what they're doing to try to build a training corpus, because in many cases, there's potential for leaks of sensitive information.

    And a lot of data in that training corpus is not going to be very timeless. Like, watching someone troubleshoot a problem with a particular network card...I'm not sure how relevant that's going to be for later hardware.

    Quality Assurance

    This involves too many different things for me to make a guess. I think that there are maybe some tasks that some QA people do today that an LLM could do. Instead of using a fuzzer to throw input in for testing, maybe have an AI predict what a human would do.

    Maybe it's possible to build some kind of model mapping instructions to operations with a mouse pointer on a screen and then do something that could take English-language instructions to try to generate actions on that screen.
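
    A toy version of that fuzzing idea, assuming the same placeholder `ask_llm` client (the field description is invented):

    ```python
    # Instead of random fuzzer bytes, ask a model for inputs a real
    # human might plausibly type into a given form field.

    def ask_llm(prompt: str) -> str:
        raise NotImplementedError("plug in your LLM client here")

    def human_like_inputs(field_description: str, n: int = 20) -> list[str]:
        reply = ask_llm(
            f"List {n} values a real user might type into this form "
            f"field, one per line, including typos and edge cases: "
            f"{field_description}"
        )
        return [line.strip() for line in reply.splitlines() if line.strip()]

    # e.g. human_like_inputs("date-of-birth field on a signup form")
    ```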

    But I've also had QA people do one-off checks, or things that aren't done at mass scale, and those probably just aren't all that sensible to automate, AI or no. I've had them do tasks in the real world ("can you go open up the machine that's seeing failures and check what the label on that chip reads, because it's reporting the same part number in software"). I've written test plans for QA to run on things I've built, and had them say "this is ambiguous". My suspicion is that an LLM trained on the information that's out there is going to have a hard time saying "this is ambiguous" without a deep understanding of the system.

    Overall

    There are other areas. But I think that any answer is probably "to some degree, depending upon what area of tech work, but mostly not, not with the kind of AI systems that exist today or with minor changes to existing systems".

    I think that a better question than "can this be done with AI" is "how difficult is this job to do with AI". I mean, I think that eventually, pretty much any job could probably be done by an AI. But I think that some are a lot harder than others. In general, the ones that are more-amenable are, I think, those where one can get a good training corpus -- a lot of recorded data showing how to do the task correctly and incorrectly. I think that, at least using current approaches, tasks that are somewhat-tolerant of errors are better. For any form of automation, AI or no, tasks that need to be done repeatedly many times over are more-amenable to automation. Using current approaches, problems that can be solved by combining multiple things from a training corpus in simple ways, without a deep understanding, not needing context about the surrounding world or such, are more amenable to being done by AI.

  • No, not even close. It's far too unreliable; without someone who knows what they're doing to vet the questionable results, AI is a disaster waiting to happen. Never mind that it cannot go fix a computer or server or any physical issue.

    • Replacing workers with AI is a dream of management, but it's not really AI; it's just a general search engine with a fairly impressive natural-language interface.

  • I went to Taco Bell the other day and they had an AI taking orders in the drive-thru, but it seemed like they had the same number of workers.

    They also weren't happy that I tried to mess with the AI.

  • It has potential to increase quality, but not to take over the job. Coders already had various add-ons that can help complete a line, suggest variables, and such. I found the auto-commenting great. Not that it did a great job, but it's one of those things where without it I'm not doing enough commenting, whereas when it auto-comments I'm inclined to correct it. I suppose at some point in the future the tech people could be writing better tasks and user stories, then commenting to have AI update the code output, or just going in and correcting it. Maybe then comments would indicate AI code vs. human-corrected code or such. Ultimately, though, until it can plan the code it's only going to be a useful tool and can't really take over. I'll tell ya, if AI could write code from an initiative the C-suite wrote, then we're at the singularity.

    • It also has potential to decrease the quality.

      I think the main pivot point is whether it replaces human engineers or complements them.

      I’ve seen people with no software engineering experience or education, or even no programming experience at all in any form, create working apps with AI.

      I’ve also seen such code in multiple instances and have to wonder how any of it makes sense to anyone. There are no best practices in sight, just a confusing set of barely working, disconnected snippets of code that only rudimentarily work together to do what the creator wanted, in a very approximate, inefficient and unpredictable way, while also lacking any of the benefits such a disconnect could offer, like encapsulation or any real domain-separated design.

      Extending and maintaining that app? Absolutely not possible without either a massive refactoring resembling a complete rewrite, or, you know, just an honest rewrite.

      The problem is, someone who doesn’t know what they are doing, doesn’t know what to ask the language model to do. And the model is happy to just provide what is asked of it.

      Even when provided with proper, informed prompts, the inability to use the entire codebase as context forces a lot of manual intervention and requires bespoke design in the codebase to work around it.

      It takes many times more work to make it all work with ML in a proper, actually maintainable and workable way, and even then it requires constant intervention, to the point that you end up doing the work you’d do manually, but with at least triple the effort.

      It can enhance some aspects, one worth special mention being commenting and generating basic documentation skeletons to work up from, but it certainly will not replace anyone for some while. Not unless the app really only has to work, maybe, sometimes, and stay as-is without any augmentations, be they maintenance or extensions or whatever.

      But yeah, it sort of makes sense. It’s a language model, not a logical model: not one capable of understanding the given context, of getting even close to enough context, or of maintaining, let alone properly understanding, the architecture it works with.

      It can mimic code, as it is a language model after all. It can get the syntax right, sure, and sometimes, in small applications, it works well enough. It can be useful to those who would not employ engineers in the first place, and it can be amazing for those cases, really, good for them! But anything that requires understanding? That’s going to do nothing but confuse and trip everyone up in the long run, requiring multiples of the work compared to just doing it with actual people who can actually understand shit and retain decades’ worth of accumulated, extremely complex context, plus the experience of applying it in practice.

      But, again, for documentation, I think it is a perfect fit. It doesn’t need any deeper context, it can put into plain language what it sees in the code, and sometimes it even gets it right and requires minimal input from people.

      So, it can increase quality in some sense, but we have to be really conscious of what that sense is, and how limited its usefulness ultimately is.

      Maybe in due time, we’ll get there. But we are not even close to anything groundbreaking yet in this realm.

      I don’t think we’ll ever get there, because we are very likely going to overextend our usage of natural resources and burn out the planet before we get there. Unless a miracle happens, such as stable fusion energy or something as yet inconceivable.

  • I think quite the opposite: AI is making each tech worker more efficient at the simple tasks it is capable of handling, while leaving the complex, high-skill tasks to humans.

    I think people see human output as a zero-sum game, where if AI takes a job then a human must lose a job. I disagree. There are always more things to do: more science, more education, more products, more services, more planets, more stars, more possibilities for us as a species.

    Horses got replaced by cars because a horse can't invent more things to do with itself. A horse can't get into the road-building industry or the drive-through industry, etc.

    • More science to do... made me think of portal. :)

    • There are so many more things to do. Nowadays, we’re just barely doing what really needs to be done. Pretty much everything else gets ignored.

      The horse analogy is actually pretty good. Back in the horsy days, you would not travel to the nearest city unless it was really important. You would rely on the products and services you had in your town. If something wasn’t available, tough luck. If it was super important, you might undertake the journey to the nearest city where you could buy that one thing.

      Nowadays though, you totally can drive 20 minutes to get stuff done. Even better than that, logistics don’t depend on horses any more, so you can have obscure stuff shipped to your home, no problem.

      This applies to all sorts of things too. Once AI is ready to take on more tasks… some really creepy and nasty stuff will probably happen, but it might almost be worth it. I think it should be possible to do many tasks that simply get ignored today.

      Like, who will pick up the trash today? Nobody. The trash guy will show up on Thursday, so deal with it. Who will organize the warehouse? Nobody. It’s not a complete disaster just yet. We can manage for the time being. We’ll fix it when production is about to stop because we can’t find stuff in the warehouse any more. Examples like this can be found everywhere.

    • There is definitely unmet market demand, which I think leaves room for much more effective tech workers.

      At least in the spaces I frequent, the cap isn't so much the volume of work you have to do as how much of it you can't get to, because the people you do have run out of time.

      The real question is whether at the corporate level there will be a competitive pressure to keep the budget where it is and increase output versus cut down on available capacity and keep shipping what you're shipping. I genuinely don't know where that lands in the long term.

      If smaller startups are able to match the output of shrunk-down massive corpos and start chipping away at them, maybe it's fine, and what we get is more output from the same people. If that's not the case and we keep the current per-segment monopoly/oligopoly... then maybe it's just a fast-forward button on enshittification. I don't think anybody knows.

      But also, either way, the improvements are probably way more incremental and less earth-shattering than either the shills/AI bros or the haters/doomers are implying, so...

  • I think we're still deep in the "shove it everywhere we can" hype era of AI, and it'll eventually die down a bit, as it does with any major new technological leap. The same fears and thoughts were present when computers came along, then affordable home computers, and then affordable Internet access.

    AI can be useful if used correctly, but right now we're trying to put it everywhere, for rather dubious gains. I've seen coworkers mess with AI until it generates the right code for much longer than it would have taken to hand-write it.

    I've seen it used quite successfully in the tech support field, because an AI is perfectly good at asking the customer if they've tried turning it off and on again and making sure it's plugged in. People would hate it on principle, I'm sure, but the repetitive "the user can't figure out something super basic" ticket is very common in tech support, and automating it would let techs focus a lot more of their time on actual problems. It's actually smarter than many T1 techs I've worked with, because at least the AI won't send the Windows instructions to a Mac user and then accuse them of not wanting to try the troubleshooting steps (yes, I've actually seen that happen). But you'll still need humans for anything that's not a canned answer or a known issue.

    One big problem is that when the AI won't work can be somewhat unpredictable, especially if you're not yourself fairly knowledgeable about how these AIs actually work. So something you think would normally take you, say, 4 hours, and that you expect done in 2 with AI, might end up being an 8-hour task anyway. It's the eternal layoff/hire cycle in tech: oh, we have React Native now, we can just have the web team do the mobile apps and fire the iOS and Android teams. And then they end up hiring another iOS team and Android team, because it's a pain in the ass to maintain and make work anyway, and you still need the specialized knowledge.

    We're still quite a ways out from being able to remove the human pilot up front. It's easy to miss how much an experienced worker implicitly guides the AI in the right direction. "Rewrite this with the XYZ algorithm" still needs the human worker to have experience with that algorithm and enough knowledge to know it's the better solution. Putting inexperienced people at the helm with AI works for a while, but eventually it's gonna be a massive clusterfuck that only the best will be able to undo. For a while yet, it's still just going to be a useful tool to have.

  • I work for a web development agency. My coworkers create mobile apps; they start off with AI building the app skeleton, then refine things manually.

    I work with PHP and some JavaScript, and AI helps me optimize my code.

    Right now AI is an automation tool that helps developers save time for better code, and it might reduce the size of development teams in the near future. But I don't see that yet, and I certainly don't see it replacing developers completely.
