VMs or containers?

I'm thinking about starting a self hosting setup, and my first thought was to install k8s (k3s probably) and containerise everything.

But I see most people on here seem to recommend virtualizing everything with proxmox.

What are the benefits of using VMs/proxmox over containers/k8s?

Or really I'm more interested in the reverse, are there reasons not to just run everything with k8s as the base layer? Since it's more relevant to my actual job, I'd lean towards ramping up on k8s unless there's a compelling reason not to.

76 comments
  • VMs are often managed imperatively and can be quite easy and familiar to set up for most people, but they can be harder or more time-consuming to reproduce, depending on the type of update or error to be fixed. They have their own kernel and can have window managers and graphical interfaces, and can therefore also be a bit resource-heavy.

    Containers are declarative and quite easy to reproduce, but can be harder to set up, as you'll have to work by trial and error from the CLI. They also run on your computer's kernel and can be extremely slimmed down.

    They are both powerful, depends how you want to maintain and interface with them, how resource efficient you want them to be, and how much you're willing to learn if necessary.

    • That sums it up really well.

      I generally tend to try to use containers for everything and only branch out to VMs if it doesn't work or I need more separation.

      This is my general recommendation as containers are easier to set up and in my opinion individual software packages are easier to maintain with things like compose. I have limited time for my self hosted instance and that took away a lot of work, especially when updating.
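
      A minimal sketch of what that compose approach can look like, assuming the official FreshRSS image as an arbitrary example (the service name, port mapping, and volume path are placeholders, not a recommendation):

      ```yaml
      # Hypothetical docker-compose.yml: one declarative file describes the
      # whole service. Image tag, port, and volume path are assumptions.
      services:
        freshrss:
          image: freshrss/freshrss:latest
          restart: unless-stopped
          ports:
            - "8080:80"
          volumes:
            - ./freshrss-data:/var/www/FreshRSS/data
      ```

      `docker compose up -d` brings it up, and `docker compose pull && docker compose up -d` is roughly the whole update story, which is where the time saving comes from.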

  • It depends on your use case and what you are trying to achieve.

    You do not need k8s (or k3s...) to use containers though. Plain old containers could also suffice, or Docker Swarm if you need some container orchestration functionality.

    Trying to learn k8s would be a good reason to use k8s though :)
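
    To make the Swarm option above concrete, roughly the same kind of compose file gains a `deploy` section and gets pushed with `docker stack deploy`. This is only a hedged sketch; the image, replica count, and port are arbitrary:

    ```yaml
    # Hypothetical stack.yml for `docker stack deploy -c stack.yml demo`.
    # The whoami image and replica count are illustrative only.
    services:
      whoami:
        image: traefik/whoami:latest
        ports:
          - "8081:80"
        deploy:
          replicas: 2
          restart_policy:
            condition: on-failure
    ```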

  • I personally really, really like (Docker) containers and I host most of my stuff with them, on a Raspberry Pi and on (free tier) Oracle Cloud VPSes. I also plan to (re)install Proxmox on a spare old laptop and run some stuff in VMs on that (namely Home Assistant), and might try a NixOS server too.

    So really, use both. Use the right tool for the job. And you can also run containers in VMs and even use Ansible to configure everything with playbooks, allowing you to re-run said playbooks when things go wrong.
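
    A hedged sketch of that Ansible idea, assuming a Debian-based host and the community Docker collection (the host group, package name, and project path are made up for illustration):

    ```yaml
    # Hypothetical playbook: install Docker, then bring up a compose project.
    # Re-running it should be idempotent, which is what makes recovery easy.
    - hosts: homelab          # assumed inventory group
      become: true
      tasks:
        - name: Install Docker from the distro repositories
          ansible.builtin.apt:
            name: docker.io
            state: present
            update_cache: true

        - name: Bring up the compose project
          community.docker.docker_compose_v2:
            project_src: /opt/stacks/media   # placeholder path
    ```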

  • Containers, unless you have a specific need for a VM.

    With a VM you have to reserve resources exclusively. If you give a VM 2 GB of RAM, then that's 2 GB of RAM you can't use for other things, even if the guest OS is using less.

    With containers, you only need as many resources as the process inside the container requires at the time.
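
    One nuance worth adding: containers draw on the host's RAM on demand, but you can still cap them if you want a VM-like ceiling. A minimal compose sketch, with arbitrary example values:

    ```yaml
    # The limit is an optional upper bound, not a reservation: unused memory
    # stays available to the host. Image and values are placeholders.
    services:
      app:
        image: nginx:alpine
        deploy:
          resources:
            limits:
              memory: 256M
            reservations:
              memory: 64M
    ```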

  • If everything you want to run makes sense to do within k8s it is perfectly reasonable to run k8s on some bare-metal OS. Some things lend themselves to certain ways of running them better than others. E.g. Home Assistant really does not like to run anywhere but a dedicated machine/VM (at least last time I looked into it).

    Regardless of k8s, it may make sense to run some sort of virtualization layer just to make management easier. One panel you can use to access all of the machines in your k8s cluster at the console level can be pretty nice, and a Proxmox cluster would give you this. You can make a VM on a host that takes up basically all of the available RAM/CPU on it. Proxmox specifically has some built-in niceties with Gluster (which I've never used, I manage Gluster myself on bare metal) which could even be useful inside a k8s cluster for PVCs and the like.

    If you are willing to get weird (and experimental), look into Rancher's Harvester: it's an HCI platform (similar to Proxmox or vSphere) that uses k8s as its base layer and even manages VMs through k8s APIs... I played with it a bit and it was really neat, but opted for bare-metal Ubuntu for my lab install (and actually moved from rke2 to k3s to Nomad to docker compose with some custom management/clustering over the course of a few years).
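
    For context on the PVC remark above, consuming that kind of storage from inside k8s usually looks something like the claim below. This is only a sketch; the storage class name is a placeholder for whatever Gluster/Ceph/Longhorn class the cluster actually exposes:

    ```yaml
    # Illustrative PersistentVolumeClaim; storageClassName is an assumption.
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: media-data
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: my-gluster-class   # placeholder, not a real default
      resources:
        requests:
          storage: 20Gi
    ```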

  • If it's relevant for your job, go for k8s. The more you tinker with it, the more knowledge you'll accumulate. Is it the optimal solution for a self hosting setup? Well, it depends but most probably not.

  • Basically, it's "why not both?"

    So first, kubernetes is a different ball of wax than containers, and if you want to run it on one machine you can, but it's really for running containers across a cluster of machines. I'm guessing you just generally mean containers so I'll go with that.

    Containers are essentially just apps running on a shared OS; virtual machines are an OS running on virtual hardware. You can stack both layers and have virtual hardware running an OS that in turn runs your containers, and nothing will really mind - in fact that's kind of the way to do it if you have one big machine you need to run a bunch of services on. You might cut up a server into a Linux VM, a Windows VM, and a BSD VM, and run containers on each one. Or you might run 3 Linux VMs and have the containers for 3 different services split between them.

    It really depends on what you're hosting and trying to do for how exactly to go about it. Take for instance a pretty common self hosted stack:

    Plex, Radarr, Prowlarr, Deluge, TrueNAS

    Now you could install TrueNAS Scale and run all of those as containers on it, and it would work OK, but TrueNAS Scale isn't really meant for managing a ton of containers right now. You could make a VM on it for each service and have them all talk to each other, but then you're probably wasting resources by duplicating the OS 5 times. Also, what if you want to run TrueNAS Core instead of Scale? Could you get everything else working in jails? Maybe, but it'll probably be a pain.

    Instead, you might install Proxmox and pass through the drive controller, and set up one VM for TrueNAS Core. Then you might make another VM for the *arr containers, and a third for Plex itself.

    It gets you the best of both worlds. TrueNAS can run on BSD instead of Linux, your *arrs are easy to deploy and update in containers that keep everything separated, and Plex is sequestered in a hardened OS with read-only access to everything else, since it gets a port forwarded and is more of a security risk. Again, that's just one option though.
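
    As a rough sketch of what the *arr VM in that layout could run, here is a hypothetical compose file. The linuxserver.io images and their default ports are common choices, but the paths, PUID/PGID, and mounts are placeholder assumptions:

    ```yaml
    # Hypothetical compose file for the *arr VM described above.
    # Volume paths and user/group IDs are assumptions, not recommendations.
    services:
      radarr:
        image: lscr.io/linuxserver/radarr:latest
        environment:
          - PUID=1000
          - PGID=1000
        ports:
          - "7878:7878"
        volumes:
          - ./radarr-config:/config
          - /mnt/media:/media
      prowlarr:
        image: lscr.io/linuxserver/prowlarr:latest
        environment:
          - PUID=1000
          - PGID=1000
        ports:
          - "9696:9696"
        volumes:
          - ./prowlarr-config:/config
    ```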

    VMs get you a ton of really handy things like snapshots and, for simple VMs, very easy portability between relatively similar hardware. I'll probably get roasted for saying this, but they're also a security tool that you should probably keep in your belt. If someone manages to break out of a container and your files are just sitting there for the taking, that's not great. If someone manages to break into your VM and "the good stuff" is on another VM, that's another layer of security they have to break through.

    Containers, on the other hand, use way fewer resources, especially RAM - and are much easier to wrangle than many OSes for updates and config.

    There's really a lot of self hosted stuff that assumes you're running docker and treats regular install as a kind of weird edge case, so you'll probably run docker even if you don't want to.

    Kubernetes, on the other hand, I would argue isn't really meant for self hosting, where you probably have one or two servers that you own. It's meant to deploy containers across various cloud servers in a way that's more automated to manage. If you need storage in a Kubernetes cluster you'll probably use something like S3 buckets, not a hard drive.

    If you want to learn it you can totally deploy it on a computer running a few VMs as nodes or with a few laptops / SBCs as a cluster, but if you just want the services to run on your server in the closet it's a bit like using a sledgehammer to nail a chair back together. That's why you don't tend to see it talked about as much - it's a bit of a different rabbit hole.

  • I'd suggest looking into k8s. It's definitely a bit more complex at the start, but there's so much more power once you get into the details. With VMs you don't share the base OS layer or the hardware, and you have to pre-define the resources you need per app in a more constrained manner, while containers can move freely in their little sandbox and pick up whatever they need.

    It is also much easier to manage replicas, upgrades, scale and a bunch of other things once you are using containers and an orchestrator like Kubernetes. Let me know if you need any help/insights. I've been trying to post more videos/answers about things that could be complicated.
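
    To make the replicas/upgrades point above concrete, a minimal Deployment might look like the sketch below; the name, image, and replica count are arbitrary placeholders:

    ```yaml
    # Minimal Deployment sketch: the orchestrator keeps 3 replicas running
    # and handles rolling image updates. Name and image are placeholders.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: demo-web
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: demo-web
      template:
        metadata:
          labels:
            app: demo-web
        spec:
          containers:
            - name: web
              image: nginx:1.27-alpine
              ports:
                - containerPort: 80
    ```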

  • K8s is more complex than plain containers under Proxmox. If you are up for the challenge, sure, go crazy.

  • Why not do both? I run proxmox on my physical hardware, then have guest VMs within proxmox that run k8s.

    Advantages of proxmox:

    • Proxmox makes it easy to spin up VMs for non-self-hosting purposes (say I want to play with NixOS)
    • Proxmox snapshots make migrations and configuration changes a bit safer (I recently messed up a Postgres 15 migration and was able to roll back with a button press)

    You can then just run Docker images through Proxmox, but I like k8s (specifically k3s) because:

    Advantages of k8s:

    • cert-manager means your HTTP services automatically get assigned TLS certs essentially for free (once you've set up cert-manager for the first time, anyway; see the sketch at the end of this comment)
    • I find k8s' YAML-based configuration easier to track and manage. I can spin my containers up fresh just from my config, without worrying about stray environment settings I might not have backed up.
    • k8s makes it easy for me to reason about which services are exposed internally to each other, and which are exposed on the host outside of my k8s cluster.
    • k8s services get persistent DNS and IPs within the cluster, so configuring nodes to talk to each other is very easy.

    And yeah, this way I get to learn two technologies rather than one 😁
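
    A hedged sketch of the cert-manager point: an Ingress annotated with a ClusterIssuer gets its certificate requested and renewed automatically. The hostname, issuer name, and service name here are placeholders, not values from this setup:

    ```yaml
    # Illustrative Ingress: cert-manager watches the annotation and stores the
    # issued cert in the named secret. All names are placeholder assumptions.
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: demo-web
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt-prod   # assumed issuer
    spec:
      rules:
        - host: demo.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: demo-web
                    port:
                      number: 80
      tls:
        - hosts:
            - demo.example.com
          secretName: demo-web-tls
    ```

    The in-cluster DNS point works the same way: a Service named demo-web in the default namespace would be reachable from other pods at something like demo-web.default.svc.cluster.local.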

  • VMs if you have enough RAM and/or need to run something on a non-compatible system (like pfSense on ARM). Containers for everything else.
