How is everyone managing secrets in a compose file repo?

I recently migrated from Portainer to plain docker compose so that I can script major tasks like updating all the containers or restarting all of them. I also liked the idea of putting the compose files into a git repo and pushing it up so they're automatically backed up. I hope to eventually turn this into more of an infrastructure-as-code setup where I can edit the repo and have it auto-push to my server and redeploy, but that's a bit further down the line.

That said, with the compose files living in a remote repo, they currently still have their secrets in them, either in a corresponding .env file or in the compose file itself. I really don't like this, since anyone who ever gains access to the repo gets all my services' secrets. What's the best way to keep compose files in a git repo without potentially exposing a bunch of secrets?
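
For reference, the layout I have right now looks roughly like this (service and variable names are just placeholders): the actual values sit in a .env file next to the compose file and get pulled in through ${...} substitution, so both files currently end up in the repo.

docker-compose.yaml:

  services:
    db:
      image: postgres:16
      environment:
        POSTGRES_PASSWORD: ${DB_PASSWORD}

.env:

  DB_PASSWORD=changeme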

I know Podman supports secrets, though I guess I'd have to manually SSH into the server to create them in a session. Currently these services all run on Docker, however.

Is Flatpaks the future for Linux?
  • On-the-fly atomic updates (the recommended update path for DNF-installed apps requires a system reboot). You can update live, but offline upgrades are safer because you don't replace a runtime something is using mid-flight.

    Also, flatpaks have some system isolation and have to go through Flatpak portals and explicit permissions/mounts, giving them less ability to negatively affect my system (a couple of example commands below).

    Also, Flathub just has everything that I need to run anyway, at least for GUI apps.
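
    To be concrete about the "explicit permissions" bit, you can inspect and tighten what any flatpak is allowed to touch from the CLI (the app ID here is just a placeholder):

    flatpak info --show-permissions org.example.App
    flatpak override --user --nofilesystem=home org.example.App

    The first prints the app's current sandbox permissions; the second revokes that app's access to your home directory for your user.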

  • What is your go-to Linux distro and why?
  • Fedora Workstation is what I use for my desktop. If I had to reinstall now I'd go with Silverblue.

    For my home lab I run Proxmox with a couple of VMs: Ubuntu Server VMs for Pi-hole DNS and an OpenMediaVault VM for my Docker workloads. I'd probably go with CoreOS or IoT if I were starting over there, though.

  • Red Hat: why I'm going all in on community-driven Linux distros.
  • If the Fedora group starts doing dumb stuff I'll consider switching, but so far they've been rock solid. While Red Hat is certainly a major contributor to Fedora (easily the biggest), they don't control it per se.

  • A really helpful tool for converting from Docker to Podman
    github.com/k9withabone/podlet: Generate podman quadlet (systemd-like) files from a podman command

    I've been using "Podlet" recently to convert my docker-compose.yaml files over to quadlet files. For those who aren't familiar, Quadlet is basically a halfway point between a compose file and a systemd unit, and it simplifies using systemd to manage your Podman containers. This tool can convert podman run commands or docker-compose.yaml files into the appropriate quadlet files in one fell swoop. If you have some really complicated stuff going on with networks it may not work though, especially if the original compose file is trying to do things that are only possible with root.

    Podlet can also be run as a container rather than directly installing it. If you do: podman run --rm --pull=newer -w $PWD -v $PWD:$PWD quay.io/k9withabone/podlet

    It will run it on your current working directory, so if you have a docker-compose.yaml file there it'll spit out the appropriate quadlet files.
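
    To give a sense of what comes out, a quadlet .container unit for a simple web service looks roughly like this (the image, port, and volume are made up, and Podlet's exact output depends on your compose file):

    [Unit]
    Description=Example web container

    [Container]
    Image=docker.io/library/nginx:latest
    PublishPort=8080:80
    Volume=/srv/www:/usr/share/nginx/html:Z

    [Service]
    Restart=always

    [Install]
    WantedBy=default.target

    Drop the file into ~/.config/containers/systemd/ (or /etc/containers/systemd/ for rootful containers), reload systemd, and it generates a service you can start like any other unit; the [Install] section is what pulls it in at boot.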

    VMs or containers?
  • What I did was install Proxmox on the bare metal, then set up a VM in which I put the containers.

    Proxmox itself stays (almost) completely stock. The only changes I've made to it were to add the NUT client package so it could gracefully shut down if my NUT server indicates that the UPS is running out of power during an outage.

    In your VMs you can do whatever: set up OMV, or a stock Ubuntu or Debian VM and install your services directly or run them with Docker/Podman, or set up Fedora CoreOS or IoT VMs and host all your services in Podman containers.

    The great thing about Proxmox is that you can take snapshot backups which take mere moments to complete, then pass those off to a NAS where they can survive an irreparable loss of your Proxmox server.

    You can also spin up new VMs as needed just to fuck around with new tech or a new way of setting up your home lab. It gives you a ton of flexibility and makes backing stuff up way easier.

    Another great thing: if three years down the line you're looking to replace your server hardware with something newer or more powerful, you can add the new device as a node to the cluster, migrate all your existing VMs over to the new hardware, and decommission the old one with little to no downtime on anything.
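
    If you'd rather do that last migration step from the CLI instead of the web UI, the command is roughly this (the VM ID and node name are just placeholders):

    qm migrate 101 newnode --online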

    Nitrousoxide @lemmy.fmhy.ml
    Posts 2
    Comments 11