There are some traps I'll consider a calculated risk.
I currently (until I eventually get around to setting up a jump server) use this exact setup. This is because CF tunnel is free, easy, and bypasses any ISP-level tomfoolery that blocks port forwarding, the last of which is the most crucial to me.
I will eventually get around to setting up my own equivalent tunnel, however that's not free and not as easy as CF tunnel.
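For reference, the basic flow looks roughly like this (the tunnel name and hostname below are placeholders, and the CLI changes occasionally, so check Cloudflare's current docs):

```shell
# One-time setup: authenticate, then create a named tunnel.
cloudflared tunnel login
cloudflared tunnel create homelab              # "homelab" is an example name
cloudflared tunnel route dns homelab home.example.com

# Run it, pointing traffic at a local service (adjust the port to taste).
cloudflared tunnel --url http://localhost:8080 run homelab
```

This is a sketch of the command sequence, not a full production config; most people end up moving the settings into a config file and running cloudflared as a service.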
Several years ago I had raw milk on a farm and it tasted incredible. I imagine that has more to do with the fact that it went from cow to mouth in about 30 seconds than with the lack of pasteurization, right?
TLDR pls.
Arch, because I use niche software and the AUR doesn't always get along with Manjaro very well (ungoogled-chromium-bin is the worst offender). I switched to Arch, configured it identically to my Manjaro install, and all has been well.
Firefox (well, librewolf, but forks are a matter of personal preference).
Chromium (well, Ungoogled Chromium) is used as a fallback for the occasional site that doesn't work with my restrictive FF configuration.
Both have uBlock, though they're configured differently to suit their individual purposes.
That looks like one of those multi-color pens. :)
This is literally the first post I saw when opening the app. I guess I'll do something else.
Quick search to verify...
So this is how I learn. Wouldn't have it any other way.
You need an absolutely insane amount of data to train LLMs. Hundreds of billions to tens of trillions of tokens. (A token isn't the same as a word, but with numbers this massive it doesn't even matter for the point.)
Wikipedia just doesn't have enough data to train an LLM on, and even if you could do it and get okay results, it'll only know how to write text in the style of Wikipedia. While it might be able to tell you all about how different cultures most commonly cook eggs, I doubt you'll get any recipe out of it that makes sense.
If you were to take some base model (such as Llama or GPT) and tune it on Wikipedia data, you'll probably get a "Llama in the style of Wikipedia" result, and that may be what you want, but more likely not.
You can tell git to use a specific key for each repo. I have the same situation as you and this is how I handle it.
https://superuser.com/questions/232373/how-to-tell-git-which-private-key-to-use
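A minimal sketch of the approach from that link (the key filename here is a placeholder; use whichever key matches the account):

```shell
# Run inside the repo that should use the alternate key.
# ~/.ssh/id_ed25519_alt is an example path -- point it at your actual key.
git config core.sshCommand "ssh -i ~/.ssh/id_ed25519_alt -o IdentitiesOnly=yes"

# Verify the setting took:
git config core.sshCommand
```

IdentitiesOnly=yes stops ssh from offering every key in your agent, which matters when the remote (e.g. GitHub) maps keys to accounts.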
Docker, using the nextcloud:stable image (not the all-in-one) with Postgres, behind nginx, and finally ZFS with 2x modern HDDs for storage. I run the stock apps plus a small handful, and have carried the same database through many versions over the last 5 years.
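For anyone curious, a rough sketch of that stack (paths, passwords, and network names below are illustrative, not my actual config):

```shell
# Hypothetical layout: Postgres + Nextcloud on a shared Docker network,
# with data on a ZFS dataset and nginx reverse-proxying from the host.
docker network create nextcloud-net

docker run -d --name nc-db --network nextcloud-net \
  -e POSTGRES_DB=nextcloud -e POSTGRES_USER=nextcloud \
  -e POSTGRES_PASSWORD=changeme \
  -v /tank/nc-db:/var/lib/postgresql/data \
  postgres:16

docker run -d --name nextcloud --network nextcloud-net \
  -e POSTGRES_HOST=nc-db -e POSTGRES_DB=nextcloud \
  -e POSTGRES_USER=nextcloud -e POSTGRES_PASSWORD=changeme \
  -v /tank/nextcloud:/var/www/html \
  -p 127.0.0.1:8080:80 \
  nextcloud:stable

# nginx on the host then terminates TLS and proxies to 127.0.0.1:8080.
```

Most people would express this as a compose file instead; the moving parts are the same.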
It's usable, but definitely not snappy.
The web interface for files is fine. Not instantaneous at all but not a huge problem. I have about 1TB of files (images and videos) in one folder, then varying files everywhere else. I suspect that the number of files (but probably not the size) is causing the slowdown.
Switching to, for example, the notes app is incredibly slow, and the NC Android app is just as bad.
Return for refund or replacement. If you're even slightly concerned about WD giving you trouble, but know eBay/the seller won't, just go that route while it's still an option.
As a heads up, EDMC runs natively on Linux well, or at least it did the last time I used it. See https://github.com/EDCD/EDMarketConnector/wiki/Installation-&-Setup#linux-with-steam-play
Since you've got it running in wine just fine, I personally wouldn't change anything, but if you have issues in the future, you can try that.
The main donate block of items is here: https://github.com/LemmyNet/joinlemmy-site/blob/main/src/shared/components/common.tsx#L129
Follow the chain and you'll see it's just using the standard HTML progress element.
It seems to be manually updated.
https://github.com/LemmyNet/joinlemmy-site/blob/main/src/shared/donation_stats.ts