My favourite recent one is Yunohost, which makes it super easy to spin up a little self-hosted server with a bunch of apps. I've been having good fun with that and a spare Raspberry Pi lately.
It's not quite as point-and-click, but I'm using Docker for that instead, because Yunohost kept messing up updates. Most server apps ship instructions for running them in Docker, usually including a docker-compose.yml file, so you don't have to rely on the Yunohost team to package the app.
The way I do it is to put each app's suggested compose file in its own file, then pull them all into my main docker-compose.yml with the top-level `include` element (which needs Docker Compose 2.20 or newer):

```yaml
include:
  - syncthing.yml
```
Then just run `docker compose pull && docker compose up -d` every time you change something or want to update your apps, and you're good to go.
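To illustrate, a syncthing.yml sitting next to the main file might look something like this. The image name matches the official Syncthing image and the ports are its documented defaults, but the host path is just a sketch, so check the image's own docs for what it actually expects:

```yaml
# syncthing.yml - one self-contained compose file per app
services:
  syncthing:
    image: syncthing/syncthing   # official image on Docker Hub
    restart: unless-stopped
    ports:
      - "8384:8384"     # web UI
      - "22000:22000"   # sync protocol
    volumes:
      # bind mount: config and data survive container re-creation
      - ./syncthing:/var/syncthing
```

Each app gets its own little file like this, and the main docker-compose.yml just lists them all.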
Software updates in particular are waaaaaayyy easier on Docker than Yunohost.
This has uncovered my shameful Linux confession lol - I don't understand Docker at all. I think I'm reasonably okay with Linux stuff: I can put an Arch install together without using the archinstall script, I got NixOS up and running without too much trouble, etc., but I just can't get my head around how Docker is supposed to work for some reason.
For self-hosting purposes, think of Docker containers as lightweight, disposable VMs configured via docker-compose.yml. All important data should live in "volumes", which (in the bind-mount form most self-hosters use) are just folders shared between the host and the container.
The end result is that you can delete and re-create containers at any time and they should just pick up where they left off from the data that's in these volumes.
Each published image has its own paths it wants persisted; they're usually spelled out in its example docker-compose file.
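Concretely, each volume line maps "path on the host" to "path inside the container". For a hypothetical app that documents /config and /data as its important paths (the app name and paths here are made up for illustration), it would look like:

```yaml
services:
  someapp:                  # hypothetical app, for illustration only
    image: example/someapp  # placeholder image name
    volumes:
      # host path : container path - the app's docs tell you the right-hand side
      - ./someapp/config:/config
      - ./someapp/data:/data
```

As long as ./someapp/ on the host stays put, you can delete and re-create the container freely and the app keeps its state.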
If you're not a dev, don't bother trying to understand Dockerfiles; you don't need them just to run pre-built images.