What is the legitimate use-case for generative AI?
I promise this question is asked in good faith. I do not currently see the point of generative AI, and I want to understand why there's hype. There are ethical concerns, but let's set ethics aside for this question.
In creative works like writing or art, it feels soulless and poor quality. In programming at best it's a shortcut to avoid deeper learning, at worst it spits out garbage code that you spend more time debugging than if you had just written it by yourself.
When I see AI ads directed towards individuals the selling point is convenience. But I would feel robbed of the human experience using AI in place of human interaction.
skin cancer diagnosis with llms has a high success rate at a low cost. This is something that was starting to exist with older ai models, but llms do improve the success rate. source
VLC recently unveiled a new feature that uses ai to generate subtitles. i haven't used it, but if it delivers then it's pretty nice
for code generation, I agree it's more harmful than useful for generating full programs or functions, but i find it quite useful as a predictive text generator; it saves a few keystrokes. Not a game changer, but nice. It's also pretty useful at generating test data, so long as the data is hard to create but easy (for a human) to validate.
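To make the hard-to-create / easy-to-validate point concrete, here's a minimal Python sketch; the record shape is invented for illustration, and the hardcoded list stands in for whatever a chatbot actually returned:

```python
# records a chatbot might return from a prompt like
# "give me 50 realistic-looking user records as python dicts"
from datetime import date

generated = [
    {"name": "Ana Souza", "email": "ana@example.com", "signup": "2023-04-01"},
    {"name": "Li Wei", "email": "li.wei@example.com", "signup": "2024-13-01"},
]

def is_valid(rec: dict) -> bool:
    """Validation is trivial even though inventing plausible data is tedious."""
    try:
        date.fromisoformat(rec["signup"])  # catches the month-13 record above
    except (KeyError, ValueError):
        return False
    return "@" in rec.get("email", "") and bool(rec.get("name"))

for rec in generated:
    print(rec["name"], "ok" if is_valid(rec) else "rejected")
```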
In creative works like writing or art, it feels soulless and poor quality. In programming at best it’s a shortcut to avoid deeper learning, at worst it spits out garbage code that you spend more time debugging than if you had just written it by yourself.
I'd actually challenge both of these. The property of "soullessness" is very subjective, and AI art has won blind competitions. On programming, it has empirically made developers faster by half again, even with the intrinsic requirement for debugging.
It's good at generating things. There are some things we want to generate. Whether we actually should, like you said, is another issue, and one that doesn't impact anyone's bottom line directly.
i've written bots that filter things for me, or convert things to machine-readable formats
the most successful thing i've done is a bot that parses a web page, figures out the date/time in a standard format, geocodes the location if one is listed in the description, and fills in a few other fields to make an ical for pretty much any page; a rough sketch of the idea is below
i think the important thing is that gen ai is good at low-risk tasks that reduce but don't eliminate human effort - turning a pile of data entry into a quick skim for correctness
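A rough sketch of how that bot might look in Python, assuming the OpenAI client and an API key; the model name, prompt, and JSON field names are guesses, and the geocoding step is left out:

```python
# sketch only: model name, prompt, and field names are assumptions
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def page_to_ical(page_text: str) -> str:
    """Ask the model for structured event fields, then emit a minimal VEVENT."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": (
                "Extract the event from this page. Reply with JSON only, like: "
                '{"summary": "...", "dtstart": "YYYYMMDDTHHMMSS", "location": "..."}'
            )},
            {"role": "user", "content": page_text},
        ],
    )
    fields = json.loads(resp.choices[0].message.content)  # assumes the model complied
    return "\n".join([
        "BEGIN:VCALENDAR",
        "BEGIN:VEVENT",
        f"SUMMARY:{fields['summary']}",
        f"DTSTART:{fields['dtstart']}",
        f"LOCATION:{fields.get('location', '')}",
        "END:VEVENT",
        "END:VCALENDAR",
    ])
```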
I need help getting started. I’m not an idea person. I can make anything you come up with but I can’t come up with the ideas on my own.
I’ve used it for an outline and then I rewrite it with my input.
Also, I used it to generate a basic UI for a project once. I struggle with the design part of programming so I generated a UI and then drew over the top of the images to make what I wanted.
I tried to use Figma but when you’re staring at a blank canvas it doesn’t feel any better.
I don't think these things are worth the cost of AI (ethically, financially, socially, environmentally, etc.). Theoretically I could partner with someone who is good at that stuff, or practice until I felt better about it.
I have a friend with numerous mental issues who texts me long, barely comprehensible messages to update me on how they are doing: no paragraphs, stream-of-consciousness style... so i take those walls of text and tell chatgpt to summarize them for me, and they go from a mess of words into an update i can actually understand and respond to.
Another use for me is getting quick access to answers I'd previously have had to spend way more time finding, reading and filtering across multiple forum and Stack Exchange posts.
Basically they are good at parsing information and reformatting it in a way that works better for me.
I have a very good friend who is brilliant and has slogged away slowly shifting the sometimes-shitty politics of a swing state's drug and alcohol and youth corrections policies from within. She is amazing, but she has a reading disorder and is a bit neuroatypical. Social niceties and honest emails that don't piss her bosses or colleagues off are difficult for her. She jumped on ChatGPT to write her emails as soon as it was available, and has never looked back. It's been a complete game changer for her. She no longer spends hours every week trying to craft emails that strike that just-right balance. She uses that time to do her job, now.
Learning languages is a great use case. I'm learning Mandarin right now, and being able to chat with a bot is really great practice for me. Another use case I've found handy is using it as a sounding board. The output it produces can stimulate new ideas in my own head, and it makes a good exploration tool that lets me pull on different threads of thought.
I like using it to help get the ball rolling on stuff and to organize my thoughts. Then I do the finer tweaking on my own. Basically it's a sliding scale: the longer it takes me to refine an AI output for smaller and smaller improvements, the closer I am to switching to manual.
It is so much faster for me to give the AI the API/library documentation than it would be to figure out how that API works on my own. Is it a perfect, drop-in, finished piece of code? No. But that is not what I ask the AI for. I ask it for a simple example, which I can then take, modify, and rework into my own code.
A generative AI with "error-free" output would be useful in a very different way than one without.
Imagine an AI that answered any question objectively and without bias. Would that threaten jobs? Yeah. Would it be a huge improvement for humankind? Yeah.
Now imagine the same AI with a 10% bs rate: how would you trust anything from it?
Currently generative AI is very, very flawed. That is what we can evaluate, and it is obvious. It is mostly useless, as it produces mostly slop and consumes far more energy and water than you would expect.
A "better" one would be useful in a different way, but just as killing half of the world's population would help against climate change, the cost of getting there might not be what we want it to be, and it might not be worth it.
Current market practice, cost, and results lead me to say it is effectively useless and probably a net negative for humankind. There is no legitimate usage, as any usage legitimizes the market practice and cost given the results.
AI saves time. There are few use cases for which AI is qualitatively better, perhaps none at all, but there are a great many use cases for which it is much quicker and even at times more efficient.
I'm sure the efficiency argument is one that could be debated, but it makes sense to me in this way: for production-level outputs AI is rarely good enough, but creates really useful efficiency for rapid, imperfect prototyping. If you have 8 different UX ideas for your app which you'd like to test, then you could rapidly build prototype interfaces with AI. Likely once you've picked the best one you'll rewrite it from scratch to make sure it's robust, but without AI then building the other 7 would use up too many man-hours to make it worthwhile.
I'm sure others will put forward legitimate arguments about how AI will inevitably creep into production environments etc., but logistically, speed and efficiency are undeniably helpful use cases.
As some witty folks have put it, LLMs can't give you anything truly, interestingly new when all they're capable of is some weighted average of what's already there. And I'll be clear in saying I hate with the force of a tsunami the way AI is being shoved at us by desperate CEOs, and how it's being used to kill labor, destroy copyright law, increase income inequality, destroy the environment, and increase the power of huge corporations headed by assholes like Altman and Musk.

But AI is getting pretty good at that weighted-average-of-what's-out-there, and a lot of the work done in several industries can benefit from that. For me, one of the great perversities or tragedies of AI is that it could be a targeted, useful tool but, instead, it's a hammer to further erode freedom. Even the coders, editors, advertisers, educators, etc. using it to do their jobs are participating in a short-term selloff of their profession to their CEOs, shareholders, etc. at the expense of large numbers of their colleagues or potential colleagues who will now never get jobs.
It's like if someone invented the wheel and Sam Altman immediately patented it and sold it to Raytheon.
In my case, YAML is a tool of Satan and Ansible is its 2001-era minion of stupid, so when I need to write Ansible I let the robots do that for me and save my sanity.
I understand that using a bot to write the 'code' for me makes me less likely to ever learn Ansible; I consider that another benefit, as I won't need to develop a pot habit later in the hopes of killing the brain cells that record my memory of learning Ansible.
I use it in a lot of tiny ways for photo editing. Adobe has a lot of integration, and 70% of it is junk right now, but things like increasing sharpness, cleaning up noise, and the heal brush are great with AI generation now.
I recently had to digitize dozens of photos from family scrapbooks, many of which had annoying novelty pattern borders cut out of the edges. Sure, I could have just cropped the photos more to hide the stupid zigzagged missing portions. But I had the beta version of Photoshop installed with the generative fill function, so I tried it. Half the time it was garbage, but the other half it filled in a bit of grass or sky convincingly enough that you couldn't tell the photo was damaged. +1 acceptable use case for generative AI, I guess.
Good for boilerplate code and variable naming, when what you want is for the model to regurgitate things it has seen before.
Short pieces of code where it's much faster to verify that the code is correct than to write the code yourself.
Sometimes, I know how to do something but I'll wait for Copilot to give me a suggestion, and if it looks like what I had in mind, it gives me extra confidence in the correctness of my solution. If it looks different, then it's a sign that I might want to rethink it.
It sometimes gives me suggestions for APIs that I'm not familiar with, prompting me to look them up and learn something new (assuming they exist).
There's also some very cool applications to game AI that I've seen, but this is still in the research realm and much more niche.
I generate D&D characters and NPCs with it, but that's not really a strong argument.
For programming though it's quite handy. Basically a smarter code completion that takes the already-written stuff into account. From machine code through assembly up to higher-level languages, I think it's a logical next step to be able to tell the computer, in human language, what you actually are trying to achieve. That doesn't mean it takes over while the programmer switches off their brain, of course, but it has already saved me quite some time.
Absolutely this. I've found AI to be a great tool for nitty-gritty questions concerning some development framework. While googling/duckduckgo'ing, you need to match the documentation's wording pretty closely when asking about something specific. AI seems to be much better at "understanding" the content and is able to match it against the documentation pretty reliably.
For example, I was reading docs up and down at ElasticSearch's website trying to find all possible values for the status field within an aggregated request. Google only led me to general documentation without the specifics. However, a quick, loosely worded question to chatGPT handed me the correct answer, as well as a link to the exact spot in the docs where it was specified.
People keep meaning different things when they say "Generative AI". Do you mean the tech in general, or the corporate AI that companies overhype and try to sell to everyone?
The tech itself is pretty cool. GenAI is already being used for quick subtitling and translating any form of media quickly. Image AI is really good at upscaling low-res images and making them clearer by filling in the gaps. Chatbots are fallible but they're still really good for specific things like generating testing data or quickly helping you in basic tasks that might have you searching for 5 minutes. AI is huge in video games for upscaling tech like DLSS which can boost performance by running the game at a low resolution then upscaling it, the result is genuinely great. It's also used to de-noise raytracing and show cleaner reflections.
Also people are missing the point on why AI is being invested in so much. No, I don't think "AGI" is coming any time soon, but the reason they're sucking in so much money is because of what it could be in 5 years. Saying AI is a waste of effort is like saying 3D video games are a waste of time because they looked bad in 1995. It will improve.
AI is huge in video games for upscaling tech like DLSS which can boost performance by running the game at a low resolution then upscaling it, the result is genuinely great
frame gen is blurry af and eats shit on any fast motion. rendering games at 640x480 and then scaling them to sensible resolutions is horrible artistic practice.
rendering games at 640x480 and then scaling them to sensible resolutions is horrible artistic practice.
Is that the reason a lot of pixel art games look like shit? I remember the era of 320x240 and 640x480, and modern pixel art looks noticeably worse.
I sorta disagree though, based on my experience with llms.
The email it generates will need to be read carefully and probably edited to make sure it conveys your point accurately. Especially if it's related to something as serious as insurance.
If you already have to specifically create the prompt, then scrutinize and edit the output, you might as well have just written the damn email yourself.
It seems only useful for writing slop that doesn't matter, that only gets consumed by other machines and dutifully logged away in a slop container.
It does sort of solve the 'blank page problem' though IMO. It sometimes takes me ages to start something like a boring insurance letter because I open up LibreOffice and the blank page just makes me want to give up. If I have AI just fart out a letter and then I start to edit it, I'm already mid-project so it actually does save me some time in that way.
For those of us who are bad at writing, though, that's exactly why we use it. I'm bad with greetings, structure, the things people expect, and I've had people get offended at my emails because they come off as rude; I don't notice those things. For that, llms have been a godsend. Yes, I of course have to validate the output, but it usually conveys the message I'm trying to send.
Yeah that's how I use it, essentially as an office intern. I get it to write cover letters and all the other mindless piddly crap I don't want to do so I can free up some time to do creative things or read a book or whatever. I think it has some legit utility in that regard.
I use LLMs for search results when conventional search engines aren't providing relevant results, and then I can fact-check whatever answers the LLMs give me. Especially using them to ask questions that are easy to verify, like mathematical questions where I can check the validity of the answers. Or similarly programming questions where I can read through the solution, check the documentation for any functions used, and make sure the output is logical, and make any tweaks if the LLM gives a nearly-correct answer. I always ask LLMs to cite their sources so I can check those too.
I also sometimes use LLMs for formatting, like when I copy text off a PDF and the spacing is all funky.
I don't use LLMs for this, but I imagine that they would be a better replacement for previous automated translation tools. Translation seems to be one of the most obvious applications since LLMs are just language pattern recognition at the end of the day. Obviously for anything important they need to be checked by a human, but they would e.g. allow for people to participate in online communities where they don't speak the community's language.
There is no point. There are billions of points, because there are billions of people, and that's the point.
You know that there are hundreds or thousands of reasonable uses of generative AI, whether it's customer support or template generation or brainstorming or the list goes on and on. Obviously you know that. So I'm not sure that you're asking a meaningful question. People are using a tool to solve various problems, but you don't see the point in that?
If your position is that they should use other tools to solve their problems, that's certainly a legitimate view and you could argue for it. But that's not what you wrote and I don't think that's what you feel.
I think genAI would be pretty neat for bit-banging tests and fuzzing, i.e. throwing semi-random requests and/or signals at some device in the hopes of finding obscure edge cases or security holes.
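A minimal sketch of what that could look like in Python; here a cheap random mutator stands in for the generative model, and the endpoint URL and use of the `requests` library are assumptions:

```python
# a cheap random mutator stands in for model-generated inputs; the endpoint
# URL and the use of `requests` are assumptions for illustration
import random
import string
import requests

def mutate(seed: str) -> str:
    """Make a handful of random single-character edits to a seed request."""
    chars = list(seed)
    for _ in range(random.randint(1, 5)):
        pos = random.randrange(len(chars))
        chars[pos] = random.choice(string.printable)
    return "".join(chars)

seed = '{"user": "alice", "amount": 10}'
for _ in range(100):
    payload = mutate(seed)
    try:
        r = requests.post("http://localhost:8080/api", data=payload, timeout=2)
        if r.status_code >= 500:  # the server choked: a candidate edge case
            print(f"server error on: {payload!r}")
    except requests.RequestException as e:
        print(f"connection-level failure on {payload!r}: {e}")
```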
I use it to re-tone and clarify corporate communications that I have to send out on a regular basis to my clients and internally. It has helped a lot with the amount of time I used to spend copy-editing my own work. I have saved myself lots of hours doing something I don't really like (copy-editing) and gained more time for the stuff I do like (engineering).
There are some great use cases, for instance transcribing handwritten records and making them searchable is really exciting to me personally. They can also be a great tool if you learn to work with them (perhaps most importantly, know when not to use them - which in my line of work is most of the time).
That being said, none of these cases, nor any of the others in this thread, is going to return the large amounts of money now being invested in AI.
Generative AI is actually really bad at transcription. It imagines dialogue that never happened. There was some institution, a hospital I think, that said every transcription had at least one major error like that.
This is an issue if it's unsupervised, but the transcription models are good enough now that with oversight they're usually useful: checking and correcting the AI-generated transcription is almost always quicker than transcribing entirely by hand.
If we approach tasks like these assuming that they are error-prone regardless of whether they are done by human or machine, and will always need some oversight and verification, then AI tools can be very helpful in very non-miraculous ways. I think it was Jason Koebler who said on a recent 404 podcast that at Vice he used to transcribe every word of every interview he did as a journalist; he now transcribes everything with AI and has saved hundreds of work hours doing so, but he still manually checks every transcript to verify it.
I'd say there are probably as many genuine use-cases for AI as there are people in denial that AI has genuine use-cases.
Off the top of my head:
Text editing. Write something (e.g. e-mails, websites, novels, even code) and have an LLM rewrite it to suit a specific tone and identify errors.
Creative art. You claim generative AI art is soulless and poor quality; to me, that indicates a lack of familiarity with what generative AI is capable of. There are tools to create entire songs from scratch, replace the voice of one artist with another, remove unwanted background noise from songs, improve the quality of old songs, separate or add vocal tracks to music, turn 2d models into 3d models, create images from text, convert simple images into complex ones, fill in missing details from images, upscale and colourise images, and separate foregrounds from backgrounds.
Note taking and summarisation (e.g. summarising meeting minutes or summarising a conversation or events that occur).
Video games. Imagine the replay value of a video game if every time you play there are different quests, maps, NPCs, unexpected twists, and different puzzles? The technology isn't developed enough for this at the moment, but I think this is something we will see in the coming years. Some games (Skyrim and Fallout 4 come to mind) have a mod that gives each NPC AI generated dialogue that takes into account the NPC's personality and history.
Real-time assistance for a variety of tasks. Consider a call centre environment as one example: a model can be optimised to evaluate calls based on language, empathy, and correctness of information. A model could be set up with a call centre's knowledge base so that it listens to the call, locates information based on the caller's enquiry, and tells the agent where that information lives (or even suggests what to say, though this is currently prone to hallucination).
If you don't know what you are doing and ask LLMs for code, then you are going to waste time debugging it without understanding it. But if you are just asking for boilerplate, or asking it to add comments and console printouts to existing code for debugging, it's really great for that. Sometimes it needs chastising or corrections, but so do humans.
I find it very useful but not worth the environmental cost or even the monetary cost. With how enshittified Google has become now though I find that ChatGPT has become a necessary evil to find reliable answers to simple queries.
There was a legitimate use case in art for drawing on generative AI for concepts, and as a stopgap for smaller tasks that don't need to be perfect. While art is art, not every designer out there is putting work out for a gallery - sometimes it's just an ad for a burger.
However, as time has gone on and the industry has reacted, I think the business reality of generative AI currently puts it out of reach as a useful tool for artists. Profit-hungry people in charge will always look to cut corners, and will lack the nuance of context a worker would have when deciding whether or not to use AI in the work.
But you could make this argument about any tool, given how fucked up capitalism is. So I guess that's my 2c - generative AI is a promising tool, but capitalism prevents it from being truly useful anytime soon.
Looking up which actors from Mars Attacks had shared work on another movie. I recognized that Pierce Brosnan and Joe Don Baker had both done Goldeneye and wondered if there were more.
Name suggestions for a black and white cat - I got some funny suggestions like Oreo and a kick-ass suggestion for Domino
I wrote guidelines for my small business. Then I uploaded the file to chatgpt and asked it to review it.
It made legitimately good suggestions and rewrote the documents in better-sounding English.
Because of chatgpt I will be introducing more wellness and development programs.
Additionally, I need med images for my website. So instead of using stock photos, I was able to use midjourney to generate a whole bunch of images in the same style that fit the theme of my business. It looks much better.
What doesn't exist yet, but is obviously possible, is automatic tweening. Human animators spend a lot of time drawing the drawings between other drawings. If they could just sketch out what's going on, about one drawing per second of footage, they could probably do a minute in an hour. This bullshit makes that feasible; there's a crude sketch of the idea after this comment.
We have the technology to fill in crisp motion at whatever framerate the creator wants. If they're unhappy with the machine's guesswork, they can insert another frame somewhere in-between, and the robot will reroute to include that instead.
We have the technology to let someone ink and color one sketch in a scribbly animatic, and fill that in throughout a whole shot. And then possibly do it automatically for all labeled appearances of the same character throughout the project.
We have the technology to animate any art style you could demonstrate, as easily as ink-on-celluloid outlines or Phong-shaded CGI.
Please ignore the idiot money robots who are rendering eye-contact-mouth-open crowd scenes in mundane settings in order to sell you branded commodities.
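For flavour, here's the crude version of that midpoint-frame idea, using classical optical flow in Python with OpenCV; the input filenames are hypothetical, and proper interpolation models handle occlusion far better than this:

```python
import cv2
import numpy as np

def midpoint_frame(frame_a, frame_b):
    """Crudely estimate the frame halfway between two frames via dense optical flow."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    # dense flow: how each pixel of frame_a appears to move by frame_b
    flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = gray_a.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    # sample frame_a half a step backwards along the flow; this ignores
    # occlusions, which is exactly where the fancy models earn their keep
    map_x = xs - flow[..., 0] * 0.5
    map_y = ys - flow[..., 1] * 0.5
    return cv2.remap(frame_a, map_x, map_y, cv2.INTER_LINEAR)

a = cv2.imread("frame_0001.png")  # hypothetical input frames
b = cv2.imread("frame_0002.png")
cv2.imwrite("frame_0001_5.png", midpoint_frame(a, b))
```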
I had not. There are a variety of demos for guessing what comes between frames, or what fills in between lines... because those are dead easy to train from. This technology will obviously be integrated into the process of animation, so anything predictable Just Works, and anything fucky is only as hard as it used to be.
E.g., I asked an LLM client for interactive lessons for teaching 4th graders about aerodynamics, esp related to how birds fly. It came back with 98% amazing suggestions that I had to modify only slightly.
A work colleague asked an LLM client for wedding vow ideas to break through writer's block. The vows they ended up using were 100% theirs, but the AI spit out something on paper to get them started.
Those are just ideas that were previously "generated" by humans, though, that the LLM learned from.
That’s not how modern generative AI works. It isn’t sifting through its training dataset to find something that matches your query like some kind of search engine. It’s taking your prompt and passing it through its massive statistical model to come to a result that meets your demand.
I use it for parsing through legalese or terms and conditions. IT IS NOT PERFECT. I wouldn't trust it ever over a lawyer. But it's great for things like "Is there anything here that is extra unusual or weirdly anti-consumer or very bad for privacy?". I think it's great for that.
People here are just saying "it will take jobs, it's inherently evil". They said the same about Photoshop, and about computers before that. I think there are evil uses for it, sure, but that doesn't mean it has no valid uses.
For coding it works really well if you give it examples, like: "i have code that looked like this .... and i made it look like this .... if i give you another piece of code that's similar to the first, can you convert it to the second for me?" It's been great for reducing the amount of boring grunt work so I can focus on the more fun stuff.
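A sketch of that before/after prompt wired up with the OpenAI python client; the model name and the code snippets are placeholders:

```python
# sketch of few-shot code transformation: model name and snippets are made up
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

before = 'name = get_field(row, 0)'
after = 'name: str = row["name"]'
target = 'age = get_field(row, 1)'

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": (
            f"I have code that looked like this:\n{before}\n\n"
            f"And I made it look like this:\n{after}\n\n"
            f"Here's another piece of code similar to the first; "
            f"convert it to the second style:\n{target}"
        ),
    }],
)
print(resp.choices[0].message.content)
```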
In C#, programming save/load in video games can be super tedious. I am self-taught and didn't have the best resources, so the only way i could find to ensure it's saving the correct variables was to manually write every single variable out to a text file. I don't care if it's plaintext; if people want to edit their save, more power to them. The issue is that there are potentially hundreds of different variables that need to be saved for the gamestate to be accurately recreated.
So it's really nice that i can just copy/paste my classes into gpt and give it the syntax for a single variable to be saved, then have it do the rest. I do have to browse through and ensure it's actually getting all the variables, but it turns a potentially mind-numbing 4-hour process into maybe a 20-minute one that's relatively engaging.
Also, if you know a better way, lmk. I read that you can simply hash the object into a text file and then unhash it, but afaik unhashing something is next to impossible, and i could never figure it out anyways.
Or you can do something simple like scramble the letters with a cipher; it would still be editable manually, but it wouldn't be as readable or obvious what everything does.
Or you can encode it; same issue as the last, but they'll have to know what it was encoded with to decode it before editing.
Or you can just turn it into bytes so the file is more awkward to work with.
You could probably mix a bunch of these together if you care enough. I don't think any of them are THE standard, foolproof option, but they're options.
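On the "better way" question above: hashing is one-way by design, which is why "unhashing" never worked out; what you read about was probably serialization. The usual answer is to serialize the whole state object at once instead of hand-writing each variable; in C# that's System.Text.Json's JsonSerializer. A minimal sketch of the same idea in Python (the GameState fields are invented):

```python
# dump the whole state object to JSON instead of hand-writing each variable;
# the GameState fields are made up for illustration
import json
from dataclasses import dataclass, asdict

@dataclass
class GameState:
    player_name: str
    health: int
    inventory: list[str]

def save(state: GameState, path: str) -> None:
    with open(path, "w") as f:
        json.dump(asdict(state), f, indent=2)  # still human-editable plaintext

def load(path: str) -> GameState:
    with open(path) as f:
        return GameState(**json.load(f))

state = GameState("alice", 100, ["sword", "potion"])
save(state, "save.json")
print(load("save.json"))
```

Adding a field to the dataclass automatically includes it in the save file, which removes the copy/paste-into-gpt step entirely.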