I can see some minor benefits - I use it for the odd bit of mundane writing, and some of the image-creation stuff is interesting, and I know a lot of people use it for coding etc - but mostly it seems to be about making more cash for corporations and stuffing the internet with bots and fake content. Am I missing something here? Are there any genuine benefits?
Medical use is absolutely revolutionary.
From GP consultations to reading test results and X-rays, AI is already better than humans at some of this, and it will keep getting better.
Computers are exceptionally good at storing large amounts of data, and with ML they are great at taking a lot of input and inferring a result from it. That is essentially diagnosis in a nutshell.
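To make that concrete, here's a toy sketch of "lots of input in, inferred result out" - a 1-nearest-neighbour "diagnoser" in plain Python. All the features, labels, and numbers are made up for illustration; real diagnostic models are vastly bigger, but the shape of the inference is the same: compare a new case against past cases and infer the closest match.

```python
def distance(a, b):
    # Squared Euclidean distance between two symptom vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def diagnose(patient, records):
    # records: list of (symptom_vector, diagnosis) pairs seen before.
    # Infer the diagnosis of the most similar past case.
    best = min(records, key=lambda rec: distance(rec[0], patient))
    return best[1]

# Features: (fever, cough, fatigue), each scored 0-3. Invented data.
records = [
    ((3, 3, 2), "flu"),
    ((0, 1, 0), "common cold"),
    ((1, 3, 3), "bronchitis"),
]

print(diagnose((3, 2, 2), records))  # closest past case: "flu"
```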
I read that one model was so good at detecting TB from X-rays that researchers reverse-engineered the "black box" hoping for some insight doctors could use. It turned out the AI had keyed on the age of the X-ray machine that took each photo, because TB is more common in developing countries, which tend to have older equipment. Womp womp.
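That failure mode (a model latching onto a confound instead of the disease) is easy to demonstrate with a toy example. All the numbers below are invented; the point is only the mechanism: if TB cases in the training set mostly come from clinics with older scanners, a "model" that looks solely at scanner age scores well there and falls apart the moment the correlation breaks.

```python
# Each record: (scanner_age_years, has_tb). In this made-up training set,
# TB cases happen to come from clinics with older machines.
train = [(18, True), (22, True), (15, True), (3, False), (2, False), (5, False)]

# A "model" that never looks at the X-ray at all - just the scanner age.
def predict_tb(scanner_age, threshold=10):
    return scanner_age > threshold

# Looks impressive while the confound holds...
train_acc = sum(predict_tb(a) == y for a, y in train) / len(train)

# ...but collapses once a modern hospital images TB patients on new machines.
test = [(2, True), (3, True), (20, False)]
test_acc = sum(predict_tb(a) == y for a, y in test) / len(test)

print(train_acc, test_acc)  # perfect on the confounded set, 0% on the new one
```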
There are supposedly multiple large-language-model radiology report generators in development. I can't say whether any of them are actually useful, though.
So you're saying that because the LLM isn't operating the machinery and processing the data start to finish without any non-LLM software, none of it is LLM? Stay off the drugs, kid.
Watch any video at random by John Green (vlogbrothers, and author of several successful books that I haven't read) and you'll learn more than you ever hoped to know about TB.
I hadn't considered this. It's interesting stuff. My old doctor used to just Google stuff in front of me and then repeat the info as if I hadn't been there for the last five minutes.