I personally feel like K8s has a purpose, but not in a homelab, since our infrastructure is usually small. I don't need clever load balancing or autoscaling for most of my work.
Oops, the R&D team accidentally created an indestructible, flesh-eating Salmonella bacterium. Management was pushing R&D to create a hardier bacterium because hobbyist grillers were killing the bacteria and bypassing the DRM on the grill, but it escaped the lab. It has infected nearly all animals except sea fish, thanks to proximity. Survivors build floating cities on the sea, and thus we have Waterworld!
As for whether your laptop is under attack: check your list of processes and see if anything out of the ordinary is running. Remove TLauncher before doing this experiment.
AFAIK you're probably fine, since malware on Linux is very, very rare, but better safe than sorry.
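For a concrete starting point, here's a rough sketch of the kind of checks I mean (standard procps/iproute2 tools; adjust to taste):

```shell
# Triage sketch: list processes sorted by CPU usage and eyeball
# the top entries for unfamiliar names.
ps aux --sort=-%cpu | head -n 15

# Cross-check listening sockets; an unexpected listener is a red flag.
ss -tulpn 2>/dev/null | head -n 20
```

Nothing here proves you're clean, but a strange process name plus a socket you can't explain is usually where an investigation starts.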
Dating apps, and online dating in general, are dead: fucked up beyond repair by capitalism, toxic incels, predators, scammers, crooks, and most recently AI. No technology can possibly survive such an onslaught, and most platforms wouldn't profit from fixing it anyway: they have a financial incentive to attract repeat customers.
Thank you for writing exactly what I was thinking.
I heard that Japan is starting to roll out a government-sponsored matchmaking app. The core advantage is that the platform's actual intention is to match people and get them to have babies. Plus, if someone misbehaves, the penalties can be much higher than a simple account ban.
No matter what you ask, an LLM will give you an answer. They will never say "I don't know."
There is a reason for this. LLMs are "rewarded" (via an internal scoring mechanism) for producing an answer. No matter what you ask, they will try to maximize that reward by generating an answer, even a heavily hallucinated one. There is no reward for saying "I don't know" to a difficult question.
I'm not into LLM research, but I think this is being worked on.
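To make that concrete, here's a toy back-of-the-envelope model (my own illustration, not an actual training objective): if a wrong answer and "I don't know" both score zero, guessing always at least ties with abstaining.

```python
# Toy expected-reward calculation: compare "guess" vs "abstain"
# under a scoring rule that gives nothing for admitting ignorance.
def expected_reward(p_correct: float,
                    r_correct: float = 1.0,
                    r_wrong: float = 0.0,
                    r_idk: float = 0.0) -> tuple[float, float]:
    guess = p_correct * r_correct + (1 - p_correct) * r_wrong
    abstain = r_idk
    return guess, abstain

guess, abstain = expected_reward(p_correct=0.05)
print(guess, abstain)  # 0.05 0.0 -> even a 5% shot beats "I don't know"
```

The fix people discuss is exactly to change the scoring, e.g. penalize wrong answers (`r_wrong < 0`) so that abstaining becomes the better move on hard questions.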
I wrote my own, but it's manageable. I use mods, resource packs, and data packs, and have a whitelist. I make sure to back up the compose file from time to time.
If you are having an issue managing it, let me know.
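For reference, a minimal sketch of the kind of compose file I mean, assuming the popular itzg/minecraft-server image (all values here are placeholders; adjust for your own setup):

```yaml
services:
  minecraft:
    image: itzg/minecraft-server
    ports:
      - "25565:25565"
    environment:
      EULA: "TRUE"              # required: accepts the Minecraft EULA
      TYPE: "PAPER"             # server flavor; mod/plugin support varies
      MEMORY: "4G"
      ENFORCE_WHITELIST: "TRUE" # kick anyone not on the whitelist
    volumes:
      - ./data:/data            # world, whitelist.json, packs live here
    restart: unless-stopped
```

Keeping everything under `./data` is what makes backups simple: copy that directory plus the compose file and you can rebuild the server anywhere.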
Think of LLMs as the person who gets good marks in exams because they memorized the entire textbook.
For small, quick problems you can rely on them ("Hey, what's the syntax for using rsync between two remote servers?") but the moment the problem is slightly complicated, they will fail because they don't actually understand what they have learnt. If the answer is not present in the original textbook, they fail.
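Funnily enough, that rsync question is itself a small trap: rsync refuses to copy directly between two remote endpoints, so the usual workaround (sketched below with hypothetical hosts hostA/hostB) is to run rsync on one of the remotes over ssh.

```shell
# This form fails before touching the network:
#   rsync -avz user@hostA:/srv/data/ user@hostB:/srv/data/
#   => "The source and destination cannot both be remote."
# Instead, log into one remote and push to the other from there:
ssh user@hostA 'rsync -avz /srv/data/ user@hostB:/srv/data/'
```

(This assumes hostA can reach hostB and has the ssh keys for it.)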
Now, if you are familiar with the source material, or if you are decently proficient in coding, you can catch their incorrect responses, correct them, and make the result your own. Instead of creating the solution from scratch, LLMs can give you a push in the right direction.
However, DON'T take their output as gospel truth. LLMs can augment good coders, but they can lead poor coders astray.
This is not specific to LLMs: if you don't know how to use Stack Overflow, you can pick the wrong solution from the list of answers. You need to be technically proficient just to recognize which solution is correct for your use case. Having a strong base will help you in the long run.
Because of NAT (network address translation).