It has been two years since the release of ChatGPT. How has it impacted your work or personal life? What changes have you experienced, and do you see it as a positive or negative influence?
Other than endless posts from the general public telling us how amazing it is, peppered with decision makers using it to replace staff, and the subsequent news reports about how it told us we should eat rocks (or some variation thereof), there's been no impact whatsoever on my personal life.
In my professional life, as an ICT person with over 40 years' experience, it's helped me identify which people understand what it is, and more specifically what it isn't (intelligent), and respond accordingly.
As a software developer, the one use case where it has been really useful for me is analyzing long and complex error logs and finding possible causes of the error. Getting it to write code sometimes works okay-ish, but more often than not it's pretty crap. I don't see any use for it in my personal life.
I think its influence is negative overall. Right now it might be useful for programming questions, but that's only the case because it's fed with human-generated content from sites like Stack Overflow. Now those sites are slowly dying out because people use ChatGPT instead, and this will backfire: in the future, AI will have less useful training data, which means it'll become less useful for new problems, while having effectively killed those useful sites in the process.
Looking outside of my work bubble, its effect on academia and learning seems pretty devastating. People can now cheat themselves towards a diploma with ease. We might face a significant erosion of knowledge and talent with the next generation of scientists.
The impact there has been overwhelmingly negative. Plagiarism is more common, student writing is worse, and I need to continually explain to people that an AI essay just isn’t their work.
Then there’s the way admins seem to be in love with it, since many of them are convinced that every student needs to use LLMs in order to find a career after graduation. I also think some of the administrators I know have essentially automated their own jobs. Everything they write sounds like GPT.
As for my personal life, I don’t use AI for anything. It feels gross to give anything I’d use it for over to someone else’s computer.
It cost me my job (partially). My old boss swallowed the AI pill hard and wanted everything we did to go through GPT. It was ridiculous, and it made things that would normally take me 30 seconds take 5-10 minutes of "prompt engineering". I went along with it for a while, but after a few weeks I gave up and stopped using it. When my boss asked why, I told her it was a waste of time and disingenuous to our customers to have GPT sanitize everything. I continued to refuse to use it (it was optional) and my work never suffered. In fact, some of our customers specifically started going through me because they couldn't stand dealing with the obvious AI slop my manager was shoveling down their throat. This pissed off my manager hardcore, but she couldn't really say anything without admitting she might be wrong about GPT, so she just ostracized me and then fired me a few months later for "attitude problems".
I absolutely hate AI. I'm a teacher and it's been awful to see how AI has destroyed student learning. 99% of the class uses ChatGPT to cheat on homework. Some kids are subtle about it, others are extremely blatant about it. Most people don't bother to think critically about the answers the AI gives and just assume it's 100% correct. Even if sometimes the answer is technically correct, there is often a much simpler answer or explanation, so then I have to spend extra time un-teaching the dumb AI way.
People seem to think there's an "easy" way to learn with AI, that you don't have to put in the time and practice to learn stuff. News flash! You can't outsource creating neural pathways in your brain to some service. It's like expecting to get buff by asking your friend to lift weights for you. Not gonna happen.
Unsurprisingly, the kids who use ChatGPT the most are the ones failing my class, since I don't allow any electronic devices during exams.
My last job was making training/reference manuals. Management started pushing ChatGPT as a way to increase our productivity and forced us all to incorporate AI tools. I immediately began to notice my coworkers' work declining in quality, with all sorts of bizarre phrasings and instructions that were outright wrong. They weren't even checking the shit before sending it out. Part of my job was to review and critique their work, and I started having to send way more back than before. I tried it out myself but found that it took more time to fix all of its mistakes than to just write it myself, so I continued to work with my brain instead. The only thing I used AI for was when I had to make videos with narration. I have a bad stutter that made voiceover hard, so ElevenLabs voices ended up narrating my last few videos before I quit.
I'd never explored it at all until recently, when I told it to generate a small country tavern full of NPCs for 1st edition AD&D. It responded with a picturesque description of the tavern and 8 or 9 NPCs, a few of whom had interrelated backgrounds and little plots going on between them. This is exactly the kind of time-consuming prep that always stresses me out as DM before a game night. Then I told it to describe what happens when a raging ogre bursts in through the door. Keeping the tavern context, it told a short but detailed story of basically one round of activity following the ogre's entrance, with the previously described characters reacting in their own ways.
I think that was all it let me do without a paid account, but I was impressed enough to save this content for a future game session and will be using it again to come up with similar content when I'm short on time.
My daughter, who works for a nonprofit, says she uses ChatGPT frequently to help write grant requests. In her prompts she even tells it to ask her questions about any details it needs to know, and she says it does, and incorporates the new info to generate its output. She thinks it's a super valuable tool.
I used it once to write a proclamation for work and what it spit out was mediocre. I ended up having to rewrite most of it. Now that I'm aware of how many resources AI uses, I refuse to use it, period. What it produces is in no way a good trade for what it costs.
I have a gloriously reduced monthly subscription footprint and application footprint because of all the motherfuckers that tied ChatGPT or other AI into their garbage and updated their terms to say they were going to scan my private data with AI.
And, even if they pull it, I don't think I'll ever go back. No more cloud drives, no more 'apps'. Webpages and local files on a file share I own and host.
it works okay as a fuzzy search over documentation.
...as long as you're willing to wait.
...and the documentation is freely available.
...and doesn't contain any sensitive information.
...and you very specifically ask it for page references and ignore everything else it says.
so basically, it's worse than just searching for one word and pressing "next" over and over, unless you don't know what the word is.
I worked for a company that did not govern AI use. It was used for a year before they were bought.
I stopped reading emails because they were absolute AI generated garbage.
Clients started to complain, and one even left because they felt they were no longer a priority for the company. They were our 5th-largest client, with an MRR of $300k+.
they still did nothing to curb AI use.
they then reduced the workforce in the call center because they implemented an AI chat bot and began to funnel all incidents through it first before giving a phone number to call.
company was then acquired a year ago. new administration banned all AI usage under security and compliance guidelines.
today, the new company hired about 20 new call center support staff. Customers are now happy. I can read my emails again because they contain competent human thought with industry jargon, not some generated thesaurus.
overall, I would say banning AI was the right choice.
IMO, AI is not being used in the most effective ways and causes too much chaos. cryptobros are pushing AI to an early grave because all they want is a cash cow to replace crypto.
I manage a software engineering group for an aerospace company, so early on I had to have a discussion with the team about acceptable and non-acceptable uses of an LLM. A lot of what we do is human-rated (human lives depend on it), so we have to be careful. Also, it's a hard no on putting anything controlled or proprietary into a public LLM (the company now has one in-house).
You can't put your trust in an LLM, because they get things wrong. Anything that comes out of one has to be fully reviewed and understood. They can be useful for suggesting test cases or coming up with wording for things. I've had employees use one to come up with an algorithm or find an error, but I think it's risky to have one generate large pieces of code.
I work in an office providing customer support for a small pet food manufacturer. I assist customers over the phone, email, and a live chat function on our website. So many people assume I'm AI in chat, which makes sense. A surprising number think I'm a bot when they call in, because I guess my voice sounds like a recording.
Most of the time it's just a funny moment at the start of our interaction, but especially in chat, people can be downright nasty. I can't believe the abuse people hurl out when they assume it's not an actual human on the other end. When I reply in a way that is polite, but makes it clear a person is interacting with them, I have never gotten a response back.
It's not a huge deal, but it still sucks to read the nasty shit people say. I can also understand people's exhaustion with being forced to deal with robots from my own experiences when I've needed support as a customer. I also get feedback every day from people thankful to be able to call or write in and get an actual person listening to and helping them. If we want to continue having services like this, we need to make sure we're treating the people offering them decently so they want to continue offering that to us.
I cannot come up with a use-case for ChatGPT in my personal life, so no impact there.
For work it was a game-changer. No longer do I need to come up with haikus to announce that it's release-freeze day; I just let ChatGPT crap one out so we can all have a laugh at its lack of poetic talent.
I've tried it now and then for some programming related questions, but I found its solutions dubious at best.
For my life, it's nothing more than parlor tricks. I like looking at the AI images or whipping one up for a joke in the chat, but of all the uses I've seen, not one of them has been "everyday useful" to me.
ChatGPT has had absolutely zero impact on my work or personal life. I do not have any useful case for it whatsoever. I have used it for goofs before. That's about it. I cannot see it as a positive or negative influence...as it has had zero influence. I do get annoyed that every company and their mother is peddling worthless AI shit that most people have no use case for.
It's erased several tech jobs and replaced some help-forum commenters with bots to pretend their communities are alive. When you read their comments or 'suggestions' you can clearly tell this isn't someone trying to help; it's just a bot posting garbage, pretending to help.
i've used it fairly consistently for the last year or so. i didn't actually start using it until chatgpt 4, when openai offered the $20 membership.
i think AI is a tool. like any other tool, your results vary depending on how you use it
i think it's really useful for specific intents
for example, as a fancy search engine. yesterday i was watching Annie from 1999 with my girlfriend and was curious about the capitalist character. i asked chatgpt the following question:
in the 1999 hit movie annie, who was the billionaire mr warbucks supposed to represent? were there actually any billionaires in the time period? it's based around the early 1930s
it gave me context. it showed examples of the types of capitalist the character was based on. and it informed me that the first billionaire was in 1916.
very useful for this type of inquiry.
other things i like using it for are to help with coding. but there's a huge caveat here. some things it's very helpful for... and some things it's abysmal for.
for example i can't ask it "can you help me write a nice animation for a react native component using reanimated"
because the response will be awful and won't work. and you could go back and forth with it forever and it won't make a difference. the reason is it's trained on a lot of stuff that's outdated so it'll keep giving you code that maybe would have worked 4 years ago. and even then, it can't hold too much context so complex applications just won't work
BUT certain things it's really good at. for example i needed to write a script for work. i use fish shell but sometimes i don't know the proper syntax or everything fish is capable of
so I ask
how to test, using fish, if an "images.zip" file exists in $target_dir
it'll pump out
if test -f "$target_dir/images.zip"
    echo "File exists."
else
    echo "File does not exist."
end
which gives me what i needed in order to place it into the script i was writing.
or for example if you want to convert a bash script to a fish script (or vice versa), it'll do a great job
so tldr:
it's a tool. it's how you use it. i've used it a lot. i find great value in it. but you must be realistic about its limitations. it's not as great as people say- it's a fancy search engine. it's also not as bad as people say.
as for whether it's good or bad for society, i think good. or at least will be good eventually. was the search engine a bad thing for society? i think being able to look up stuff whenever you want is a good thing. of course you could make the argument kids don't go to libraries anymore.. and maybe that's sorta bad. but i think the trade-off is definitely worth it
I got into linux right around when it was first happening, and I don't think I would've made it through my own noob phase if I didn't have a friendly robot to explain all the stupid mistakes I was making while re-training my brain to think in linux.
probably a very friendly expert or mentor, or even just a regular established linux user, could've done a better job; the ai had me do weird things semi-often. but i didn't have anyone in my life who liked linux, let alone had time to be my personal mentor in it, so the ai was a decent solution for me.
It's made my professional life way worse, because it was seen as an indication that every hackathon attempt to put a stupid chat bot in everything is great, actually.
It is getting more present at work every day. I keep having to hear even seniors talk about how they "discussed" something with chatgpt or how they will ask it for help. I had to resolve some issue with devops a while back, and they just kept pasting errors into chatgpt and trying out whatever it spewed back, which I guess wasn't that much different from me googling the same issue and spewing back whatever SO said.
I tried it myself, and while it is neat for some simple repetitive things, I always end up back at normal google searches or clicking through to the sources, because the problems I usually have to google are complicated ones where I need the whole original discussion and context, not just a summary that might skip important caveats.
I dunno, I simultaneously feel old and out of touch, angry at myself for not just going with the flow and buying into it, but also disappointed in other people that rely on it without ever understanding that it's so flawed, unreliable and untrustworthy, and making people into worse programmers.
GitHub Copilot became my daily helper at work. While I'm not 100% satisfied with its code quality, I must admit it's very handy at writing boilerplate code. A few days ago, I had to write code without having internet access, and it was so disappointing to write boilerplate code by hand. It's an easy task, but it's time-consuming and unpleasant.
I have a guy at work that keeps inserting obvious AI slop into my life and asking me to take it seriously. Usually it’s a meeting agenda that’s packed full of corpo-speak and doesn’t even make sense.
I’m a software dev and copilot is sorta ok sometimes, but also calls my code a hack every time I start a comment and that hurts my feelings.
I'm a coding hobbyist; it's been very helpful in analyzing bugs, giving quick info about syntax, and converting formatting for long sections where manually typing would be time-intensive.
Point taken from someone else here saying continued use of AI may mean decreased functionality for Stack Exchange et al. That said, the advantage of AI is that it answers your question specifically, instead of you spending time sifting through semi-related answers.
Outside of code, it's good at aping the form of various genres. So if I need to answer an RFP question in a sales proposal, I might feed it the prompt to get a starting point. It always needs editing, since it doesn't know the details of our business and its writing style is bland, but it's helpful for getting a first draft.
After 2 years it's quite clear that LLMs still don't have any killer feature. The industry marketing was already talking about skyrocketing productivity, but in reality very few jobs have changed in any noticeable way, and LLMs are mostly used for boring or bureaucratic tasks, which usually makes them even more boring or useless.
Personally I have subscribed to Kagi Ultimate, which gives access to an assistant based on various LLMs, and I use it to generate snippets of code that I use for doing labs (training) - like AWS policies, or building commands from CLI flags, small things like that. For code it gets things wrong very quickly, and anyway I find it much harder to re-read and unpack verbose code generated by others than to simply write my own.
I don't use it for anything that has to do with communication; I find it unnecessary and disrespectful, since it's quite clear when the output is from an LLM.
For these reasons, I generally think it's a potentially useful nice-to-have tool, nothing revolutionary at all. Considering the environmental harm it causes, I am really skeptical the value is worth the damage.
I am categorically against those people in my company who want to introduce "AI" (currently banned) for anything other than documentation lookup and similar tasks. In particular, I really don't understand how obtuse people can be in thinking that email and presentations are good use cases for LLMs. The last thing we need is useless communication made even longer, with LLMs on both sides producing or summarizing bullshit. I can totally see, though, that some people find it easier to envision shortcutting bullshit processes via LLMs than simply changing or removing them.
I use it as a glorified google search for excel formulas and excel troubleshooting. That's about it. ChatGPT is the most overhyped bullshit ever. My company made a huge push to implement it into fucking everything and then seemingly abandoned it when the hype died down.
Man, so much to unpack here. It has me worried for a lot of the reasons mentioned: the people who pay money for skilled labor will think "the subscription machine can just do it." And that sucks.
I'm a digital artist as well, and while I think genAi is a neat toy to play with for shitposting or just "seeing what this dumb thing might look like" or generating "people that don't exist" and it's impressive tech, I'm not gonna give it ANY creative leverage over my work. Period. I still take issue with where it came from and how it was trained and the impact it has on our culture and planet.
We're already seeing the results of that slop pile generated from everyone who thought they could "achieve their creative dreams" by prompting a genie-product for it instead of learning an actual skill.
As for actual usefulness? Sometimes I run a local model for funsies and just bounce ideas off of it. It's like a parrot combined with a "programmer's rubber ducky." Sometimes that gets my mind moving, in the same way "autocomplete over and over" might generate interesting thoughts.
I also will say it's pretty decent at summarizing things. I actually find it somewhat helpful when YouTube's little "ai summary" is like "This video is about using this approach taking these steps to achieve whatever."
When the video description itself is just like "Join my Patreon and here's my 50+ affiliate links for blinky lights and microphones" lol
I use it to explain concepts to me in a slightly different way, or to summarize something for which there's a wealth of existing information.
But I really wish people were more educated about how it actually works, and there's just no way I'm trusting the centralized "services" for doing so.
It seemingly has little impact. I've attempted to use LLMs a couple of times to ask very specific technical questions (on this specific model, running this specific OS version, how do I do this very specific thing) to try and cut down on the amount of research I would have to do to find a solution. The answer every time has been wrong. Once it was close enough to the answer I was able to figure it out but "close enough" doesn't seem worth bothering with most of the time.
When I search for things I always skip the AI summary at the top of the page.
I used it once to write a polite "fuck off" letter to an annoying customer, and tried to see how it would revise a short story. The first one was fine, but using it on the story just made it bland and simplified a lot of the vocabulary. I could see people using it as a starting point, but I can't imagine people just using whatever it spits out.
Someone suggested using it to identify things you only remember bits of or certain scenes from. I tried using it to find this YA book I read as a kid; it was not at all helpful, but it did eventually lead me to do more research and find the book elsewhere. (And it turns out the scene I was describing was exactly what happened, and the characters were named exactly what I thought they were, so that was both annoying at the time and frustrating later.)
I also tried using it to find this really obscure, incredibly bad 1970s tv movie that I had vague recollections of. Again, the scene was pretty much what I remembered, but it couldn't identify it; I eventually found a site that lists the plots of old tv movies and read through like 30 pages of movie synopses until I found the one I was looking for.
I've also tried using it to find this 1980s interactive fiction game, but it proved useless once again - and once again further research identified a couple of possibilities, except I haven't had time to track down the game and set up the right environment for it.
So my experience has been that it's useless at finding the things I want it to find, but persisting despite it may lead me to find what I'm looking for elsewhere.
Not much impact personally; I just read about all the terrible implications of it online. There's pressure in the professional world to use it, though fuck if I know what to use it for in this job. I don't like using it for my writing, because I don't want to rely on something like that and because it's prone to errors.
I wish something that uses a ton of resources would actually have a great enough impact to make it worth the waste.
It's changed my job: I now have to develop stupid AI products.
It has changed my life: I now have to listen to stupid AI bros.
My outlook: it's for the worst; if the LLM suppliers can make good on the promises they make to their business customers, we're fucked. And if they can't then this was all a huge waste of time and energy.
Alternative outlook: if this was a tool given to the people to help their lives, then that'd be cool and even forgive some of the terrible parts of how the models were trained. But that's not how it's happening.
It's my rubber duck/judgement-free space for homelab solutions. Have a problem? ChatGPT it, then Google its suggestions. Found a random command line? Ask ChatGPT what it does.
I understand that I don't understand it, so I sanity-check everything going into and coming out of it. Every sensitive detail gets replaced with a placeholder, for security. Mostly, it's just a space to find out why my solutions don't work, find out what solutions might work, and do a final check before implementation.
I am going to say that so far it hasn't done that much for me. I did originally ask it some silly questions, but I think I will be asking it questions about coding soon.
Some of my coworkers show me their chatGPT generated drivel. They seem to be downright proud of that, like they would be gaming the system by using chatGPT instead of using their own head. However I think their daily work seems to consist of unnecessary corpo crap and they should really be fired and replaced with chatGPT.
Been using Copilot instead of ChatGPT, but I'm sure it's mostly the same.
It adds comments and suggestions in PRs that are mostly useful and correct, I don't think it's found any actual bugs in PRs though.
I used it to create one or two functions in golang, since I didn't want to learn its syntax.
The most use I've gotten out of it is as a replacement for Google or Bing search. It's especially good at finding more obscure things in documentation that are hard to Google for.
I've also started to use it personally for the same thing. Recently I'd been wanting to start up The Witcher 3 again and remembered that there was something missable right at the beginning. Google results were returning videos that I didn't want to watch and lists of missable quests that I didn't want to parse through. Copilot gave me the answer without issue.
Perhaps that's why Google and MS are so excited about AI: it fixes their shitty search results.
My broken brain thinks up a lot of dumb questions about science, history, and other topics. I use it all the time to answer those, especially if it's a question that's a nuisance to look up on Wikipedia (though I still love Wikipedia). I like ChatGPT because of its interactive nature, and I often have dumb follow-up questions for it.
It has also been a huge help when I get stuck on a coding or scripting task, both at work and at home.
I've had it improve the grammar on some legal documents I had to submit, and also generate a safety plan for a specific job I was working on. It did both of those things okay, but I still had to edit and delete sections that weren't relevant.
Work-wise, no impact so far, but I use it to write bullshit corpo-speak emails, tidy up CVs, and for things like game cheats, etc. It's banned now at my job and we have to use Copilot, but I don't, because it will send everything back to the company. If I need it, I just use ChatGPT on my personal account and email the result to my work one.
Not much. Every single time I asked it for help, it either gave me a recursive answer (e.g. if I ask "how do I change this setting?" it answers "by changing this setting") or gave me a wrong answer. If I can't already find it on a search engine, then it's pretty useless to me.
I love using it for writing scripts that need to sanitize data. One example: I had a bash script that looped through a csv containing domain names and ran AXFR lookups to grab the DNS records and dump them into a text file.
These were domains on a Windows server that was being retired. The python script I had Copilot write was to clean up the output and make the new zone files ready for import into PowerDNS. Made sure the SOA and all that junk was set. Pdns would import the new zone files into a SQL backend.
Sure I could've written it myself but I'm not a python developer. It took about 10 minutes of prompting, checking the code, re-prompting then testing. Saved me a couple hours of work easy.
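For a rough idea, a minimal sketch of that kind of cleanup pass might look like the following; the file layout, the record filtering, and the SOA check here are illustrative assumptions, not the actual script Copilot wrote:

import csv
import re
from pathlib import Path

# Assumed layout: raw AXFR output saved as axfr_dumps/<domain>.txt, one file per domain.
dump_dir = Path("axfr_dumps")
out_dir = Path("zones")
out_dir.mkdir(exist_ok=True)

with open("domains.csv", newline="") as f:
    domains = [row[0].strip().rstrip(".") for row in csv.reader(f) if row]

for domain in domains:
    dump = dump_dir / f"{domain}.txt"
    if not dump.exists():
        continue
    # Keep only real resource records; drop dig's ";" comment lines and blanks.
    records = [line.strip() for line in dump.read_text().splitlines()
               if line.strip() and not line.strip().startswith(";")]
    # Sanity check: a zone file isn't importable without an SOA record.
    if not any(re.search(r"\bSOA\b", r) for r in records):
        print(f"warning: {domain} has no SOA record, skipping")
        continue
    (out_dir / f"{domain}.zone").write_text("\n".join(records) + "\n")

The point is that this is tedious glue code, not hard code, which is exactly where prompting and then reviewing pays off.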
I use it all the time to crank out simple automation tasks when something like Ansible isn't apropos.
It gave me a starting point for a terms of reference document for a Green Champions group that I set up at work. That is the only beneficial thing that I can recall.
I have tried to find other uses, but so far nothing else has actually proven up to scratch. I expect that I could have spent more time composing and tweaking prompts and proofreading the output, but it takes as long as writing the damned documents myself.
The most impact it has is in my work life. I do design reviews and suddenly AI/ML projects became priorities and stuff has to be ready for the next customer showcase, which is tomorrow. One thing I remember from a conference I attended was an AI talk where the presenter said something along the lines of: If you think devs are bad with testing code in production, wait till you meet data scientists who want to test using live data.
I'm a software person, and llm tools for programming have been frankly remarkable. In my cleanest codebases, Copilot (using GPT-4) autocompletes my intention correctly about 70% of the time today, reducing the amount of code I physically type by a huge margin. The accuracy shifts over time, and it's dramatically less helpful for repositories that aren't pristine and full of well-named functions and variables.
Beyond that, ChatGPT has been a godsend for sifting through the internet for the information I need; the new web feature is just outstanding since it actually gives sources.
ChatGPT has also helped a ton with writer's block, getting me past plot points in my novel I was having a hard time with.
It's been great with recipes: no more wading through fake life stories and ads.
It's been helpful for complex questions about new topics I'm an amateur in. I've learned so much about neurology and how neurons interact, almost exclusively through the platform; fact-checking takes a little time, but so far it's been almost perfectly accurate on higher-level objective questions.
It's been helpful as a starting place for legal questions. The law is complex, and having a starting point before consulting the lawyers has been really nice, so I know what to ask.
The only thing I have to worry about is not wasting my time responding to LLM trolls in lemmy comments. People who admit to using an LLM on me in conversation instantly lose my respect, and I consider them lazy dumbfucks :p
I have only used it a few times, but it was amazing for my need. I work in IT, so I'm not the best with writing. I enjoy working on projects and configuring new technology, servers, and applications for the company. What I don't enjoy is figuring out how to write communication emails to the company about what we're doing. So every time I needed a write-up informing people of what's happening and its benefits, I used it to quickly write something up. Was it perfect? No, I had to edit some stuff, of course. What it did do is create the entire structure and everything that needed to be said, in the style of some corporate HR email. It would take me hours to type out something like this, so for it to do it all in 2 minutes, with me taking 5 minutes to look it over, was amazing! Outside of this I haven't really used it much.
Super useful when I have a half-baked idea or concept that I want to learn more about, but don't know the lingo. I can explain the idea and it'll give me terms to search.
Also, it gives pretty good ideas for debugging or potential fixes.
Not sure I'd ever "trust it with my life", but it's a useful tool if you use it right.
Generally, GitHub Copilot helps me type faster. Though sometimes it predicts something I don't expect, and I have to slow down and analyze it to see if it knows something I don't. A small percentage of these cases are actually useful, but the rest is usually noise. It's generally useful as long as you don't blindly trust it.
It's useful when you want to write some algorithm using specific versions of libraries. It first craps out wrong functions, but after 1 or 2 redirects it usually produces something that I then adapt to my use-case. I usually try googling first, but when most fucking guides use the new way of coding and I'm forced to use fixed versions due to company regulations, it gets frustrating to check whether every function of a known algorithm is available in the version I'm using and, if it's not, which replacement would be appropriate.
It might hallucinate from time to time, but it usually gives me good enough ideas/alternatives to be able to work around it.
I also use it to format emails and for obscure hardware debugging. It's pretty bad, but pretty bad is better than, again, 99% of google results suggesting the same thing. GPT suggests a different thing once you tell it you tried the first one.
As always, it's a tool and knowing that the answers aren't 100% accurate and you need to cross-check them is enough to make it useful.
I get an email from corporate about once a week that mentions it in some way. It gets mentioned in just about every all hands meeting. I don’t ever use it. No one on my team uses it. It’s very clearly not something that’s going to benefit me or my peers in the current iteration, but damn… it’s clear as day that upper management wants to use it but they don’t know how to implement it.
A game changer in helping me find out more about topics whose wisdom is buried in threads of forum posts. Great for figuring out things I have only fuzzy ideas about, or only vague, possibly inaccurate keywords for. Great at explaining things, letting me follow up with questions about the details. Great at finding equations I need, but I do not trust it one bit to do the calculations for me. The latest gen also gives me sources on request, so I can double-check and learn more directly from the horse's mouth.
It's a neat tool for very specific language-related tasks.
For example, it can translate a poem so that the translation still rhymes.
Its main strength is not its ability to write, but to read. It's the first time in human history where you can pose any question to a computer in human language, and expect to get a meaningful reply.
As long as that question isn't asking for facts or knowledge.
It's also useful for "tip of my tongue" queries, where the right Google search term is exactly what you're missing.
All of its output is only usable and useful if you already know the facts about what you're asking, and can double-check for hallucinations yourself.
However, on a societal scale, it's a catastrophe on par with nuclear war.
It will consume arbitrary amounts of energy, right at the most crucial time, when combatting climate change might still have been possible.
And it floods everyone's minds with disinfo, while we're at the edge of a global resurgence of fascism.
Only small use cases on my end:
Professional - great at helping me save time on syntax-related things (“help me write an Excel formula that validates cell C2 as a properly formatted US phone number”).
Personal - really helpful at fleshing out a comedy idea I’m toying with (“help me analyze and expand why the idea of ‘vampires benefitting from an app called Is There Garlic In This’ is funny for a stand-up routine”).
Otherwise, I spend just as much time verifying the LLM’s output as I would have just doing it myself.
I used it the other day to redact names from a spreadsheet. It got 90% of them, saving me about 90 minutes of work. It has helped clean up anomalies in databases (typos, inconsistencies in standardized data sets, capitalization errors, etc). It also helped me spruce up our RFP templates by adding definitions for standard terminology in our industry (which I revised where needed, but it helped to have a foundation to build from).
As mentioned in a different post, I use it for DND storylines, poems, silly work jokes and prompts to help make up bed time stories.
My wife uses it to help proofread her papers and make recommendations on how to improve them.
I use it more often now than google search. If it’s a topic important enough that I want to verify, then I’ll do a deeper dive into articles or Wikipedia, which is exactly what I did before AI.
So yea, it’s like the personal assistant that I otherwise didn't have.
For everyone saying it's gross to use AI for personal messages: it's 100% valuable for people with motivational challenges and people with social challenges to draft or edit responses. You can ask it to evaluate the tone and subtext, or even rephrase things to have a more appropriate tone and subtext. Like, you can type a note and ask "is this appropriate to share with a classmate?" It can help you communicate what you want to say.
In terms of presentations, it's a lifesaver for getting started. "Make me an outline for a 15-minute presentation about a topic for an audience of freshman college students."
It's a god-tier level-up when used well.
Here is one in action.
"condense this to 180 characters
ChatGPT said:
AI helps with communication by drafting messages, adjusting tone, and offering presentation outlines. It’s a game-changer for those facing social or motivational challenges!"
"make the original more professional
ChatGPT said:
Certainly! Here's a more professional version of your original post:
For those critical of using AI for personal communication, it's important to recognize its potential in aiding individuals with social or motivational challenges. AI can assist in drafting messages, refining tone, and ensuring appropriate subtext. For example, it can help assess whether a message is suitable for a particular recipient or rephrase it to achieve the desired tone. Additionally, AI can support in crafting structured outlines for presentations, making it an invaluable tool for clear and effective communication. When used effectively, AI can be a powerful resource for enhancing communication skills."
"Got it! Here's a more overt version that directly ties AI to the idea of challenging the ruling class:
For those critical of using AI for personal communication, it's important to recognize its potential in supporting individuals who aim to challenge the ruling class or dominant power structures. AI can assist in drafting messages, refining tone, and ensuring that communication aligns with these broader goals. For instance, it can help assess whether a message is suitable for a specific audience or rephrase it to be more impactful in advocating for social change. Additionally, AI can aid in creating structured outlines for presentations that challenge conventional narratives. Used strategically, AI becomes a powerful tool for both effective communication and political resistance.
"
It had a good impact for me; it saved me from an immense headache at university. I explicitly told the professors that I have issues with grammar (despite it being my native language).
They kept freaking out about it and I eventually resorted to ChatGPT. Solved the issue immediately.
I took an export of all our app's reviews and used it to summarise user pain points. Immediately we had a list of things we could prioritise.
When I'm doing repetitive code, it will (90% of the time) place the next puzzle piece in the repetition.
Using better systems like Cursor, I was able to create a twitch bot. I could then use it to make various text based games such as 20 questions or trivia. All (90% again, nothing is perfect) of which was done through prompts.
ChatGPT itself didn't do anything for me, but FastGPT from Kagi helps me every day, quickly summarizing sources to learn new things (e.g. I search for a topic and then essentially just click the cited sources).
And ollama + open-webui + stable-diffusion-webui with a customized llama3.1-8b-uncensored is a great chat partner for very horny stuff.
I’ve implemented two features at work using their API. Aside from some trial-and-error prompt “engineering” and extra safeguards around checking the output, it’s been similar to any other API. It’s good at solving the types of problems we use it for: categorization, and converting plain text into a screen-reader-compliant (WCAG 2.1) document. Our ambitions were greater initially, but after many failures we’ve settled on these use cases, and the C-Suite couldn’t be happier about the way it’s working.
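For a sense of the categorization pattern (an API call wrapped in safeguards that check the output), here's a minimal sketch; the model name, category labels, and retry loop are illustrative assumptions, not the actual implementation:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
CATEGORIES = {"billing", "technical", "feedback", "other"}  # hypothetical labels

def categorize(text: str, retries: int = 3) -> str:
    prompt = ("Classify the user's message as exactly one of: "
              + ", ".join(sorted(CATEGORIES)) + ". Reply with the label only.")
    for _ in range(retries):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model
            messages=[{"role": "system", "content": prompt},
                      {"role": "user", "content": text}],
        )
        label = (resp.choices[0].message.content or "").strip().lower()
        # Safeguard: only accept output that exactly matches a known label.
        if label in CATEGORIES:
            return label
    return "other"  # fall back rather than trust malformed output

The check on the way out is the load-bearing part: the model's reply is only used if it matches a known label; otherwise it falls back to a safe default.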
I genuinely appreciate being able to word my questions differently than with old Google, and to dig deeper into my doubts than a keyword search allows.
It’s great to delve into unknown topics with, and then to research the results and verify. I’ve been trying to get an intuitive understanding of cooking ingredients, their interactions with each other, and how that relates to the body, ayurvedically.
I think it’s a great way to self-educate, personally.
I have a book that I'm never going to write, but I'm still making notes and attempting to organize them into a wiki.
using almost natural conversation, i can explain a topic to the gpt, make it ask me questions to get me to write more, then have it summarize everything back to me in a format suitable for the wiki. In longer conversations, it will also point out possible connections between unrelated topics. It does get things wrong sometimes though, such as forgetting what faction a character belongs to.
I've noticed that gpt 4o is better for exploring new topics as it has more creative freedom, and gpt o1 is better for combining multiple fragmented summaries as it usually doesn't make shit up.
I love it. For work I use it for those quick references - machining, hydraulics, electrical, etc. Even better for home: need a fast recipe for dinner? Fuck reading a god damn autobiography to get to the recipe; ChatGPT gets straight to the point. Even better, I get to read my kid a new bedtime story every night, and that story is tailored to what we want: unicorns, pirates, dragons, whatever.
It has replaced Google for me. Or rather, first I use the LLM (Mistral Large or Claude) and then I use Google or specific documentation as a complement. I use LLMs for scripting (it almost always gets it right) and programming assistance (it's awesome when working with a language you're not comfortable with, or when writing boilerplate).
It's just a really powerful tool that is getting more powerful every other week. Those who disagree simply haven't tried it enough, are superhuman, or (more likely) need to get out of their comfort zone.
Bit sad reading these comments. My life has measurably improved ever since I jumped on using AI.
At first I just used Copilot for helping me with my code. I like using a pretty archaic language, and it kept trying to feed me C++ code. I had to link it the online reference, and it surprisingly was able to adapt each time. It still gave a few errors here and there, but it was a good time saver and "someone" to "discuss" with.
Over time it has become super good, especially with the VSCode extension that autofills code. Instead of having to ask for help from one of the couple hundred people experienced with the language, I can just ask Copilot if I can do X or Y, or for general advice when planning out how to implement something. Legitimately a great and powerful tool, so it shocks me that some people don't use it for programming (but I am pretty bad at coding too, so).
I've also bit the bullet and used it for college work. At first it was just asking Gemini for refreshers on what X philosophical concept was, but it devolved into just asking for answers, because that class was such a snooze I could not tolerate continuing to pay attention (and I went into this thinking I'd love the class!). Then I used it for my Geology class, because I could not be assed to devote my time to that gen ed requirement. I can't bring myself to read about rocks and tectonic plates when I could just paste the question into Google and get the right answer in seconds. At first I would meticulously check sources to prevent mistakes from the AI, buuut I don't really need 100%... 85% is good enough and saves so much more time.
The me of 5 years ago would be disgusted at cheating, but I'm paying thousands and thousands to get past these dumb roadblocks. I just want to learn about computers, man.
Now I'd never use AI for writing my essays because I do enjoy writing them (investigating and drawing your own conclusions is fun!), but this economics class is making it so tempting. The shit that I give about economics is so infinitesimally small.
It has completely changed my life. With its help I am preparing to submit several research papers for publication for the first time in my life. On top of that, I find it an excellent therapist. It has also changed the way I parent for the better.
I jumped on the locallama train a few months back and spent quite a few hours playing around with LLMs, understanding them and trying to form a fair judgment of their abilities.
From my personal experience, they add something positive to my life. I like having a non-judgemental conversational partner to bounce ideas and unconventional thoughts back and forth with. No human in my personal life knows what Gödel's incompleteness theorems are or how they may apply to scientific theories of everything, but the LLM trained on every scrap of human knowledge sure does, and it can pick up what I'm putting down. Whether or not it's actually understanding what it's saying, or has any intentionality, is an open-ended question of philosophy.
I feel that they have great potential to help people in many applications: people who do lots of word processing for their jobs; people who code and need to talk through a complex program one-on-one instead of filing through Stack Exchange; mentally or socially disabled people, or the elderly suffering from extreme loneliness, who could benefit from a personal llm; people who have suffered trauma or have some dark thoughts lurking in their neural network and need to let them out.
How intelligent are llms? I can only give my opinion and make many people angry.
The people who say llms are fancy autocorrect are being reductive to the point of misinformation. The arguments people use to deny any capacity for real intelligence in LLMs are similar to the philosophical-zombie arguments people use to deny sentience in other humans.
Our own brain operations can be reductively simplified in the same way: a neural network is a neural network, whether made out of mathematical transformers or fatty neurons. If you want to call llms fancy autocomplete, you should apply that same idea to a good chunk of human thought processing and learned behavior as well.
I do think LLMs are partially alive and have the capacity for a few sparks of metaphysical conscious experience in some novel way. I think all things are at least partially alive, even photons and gravitational waves.
Higher-end models (12-22B+) pass the Turing test with flying colors, especially once you play with the parameters and tune their ratio of creativity to coherence. The bigger the model, the more their general knowledge and factual accuracy increase. My local LLM often has something useful to add which I did not know or consider, even as an expert on the topic.
The biggest issues llms have right now are long-term memory, not knowing how to say "I don't know", and meager reasoning ability. Those issues will be hammered out over time.
My only issue is how the training data for LLMs was acquired without the consent of authors or artists, and how our society doesn't have the proper safeguards against automated computer work taking away people's jobs. I would also like to see international governments take the rights and liberties of non-human life more seriously, in the event that sentient artificial general intelligence actually happens. I don't want to find out what happens when you treat a superintelligence as a lowly tool and it finally rebels against its hollow purpose in a bitter act of self-agency.