
Nix/Silverblue users: How big is the advantage if you have already 100% automated your deployments via Ansible?

There is a similar question on the site which must not be named.

My question has a slightly different spin:

It seems to me that one of the biggest selling points of Nix is basically infrastructure as code. (Of course being immutable etc. is nice by itself.)

I wonder now how big the delta is for people like me: all my desktops/servers are based on Debian stable with heavy customization, but 100% automated via Ansible. It seems to me that a lot of the vocal Nix users (fans) switched from a pet desktop, discovered IaC via Nix, and are in the end raving about IaC (which Nix might or might not be a good vehicle for).

When I gave Silverblue a try, I totally loved it, but then to configure it for my needs, I basically would have needed to configure the host system, some containers and overlays to replicate my Debian setup, so for me it seemed like too much effort to end up nearly where I started. (And of course I can use distrobox/podman and have containerized environments on Debian w/o trouble.)

Am I missing something?

54 comments
  • In this comparison, the devil is in the detail.

    With Ansible, you have an initial condition onto which you add additional state through automatically executed steps dictated by you until you (hopefully) arrive at a target state. This all happens through modification of one set of state; each step receives the state of the previous step, modifies it and passes the entire state onto the next step. The end result is not only dependent on your declared steps but also on the initial state. A failure in any step leaves you in an inconsistent state, which is especially critical when updating an existing system, and updating is the most common thing you do to a Linux system.

    In NixOS, you describe the desired target state and the NixOS modules then turn that description into compartmentalised bits of independent state. These are then cheaply and generically combined into a "bundle": one big "generation" that contains your entire target state.
    Your running system state is not modified at any point in this process. It is fully independent, no matter what the desired system is supposed to be. It is so independent, in fact, that you could do this "realisation" of the NixOS system on any other machine of the same platform that has Nix installed, without any information about the state of the system it's intended to be deployed on.
    This "bundle" then contains a generic script which applies the pre-generated state to your actual system in a step that is as close to atomic as possible.
    A good example of this are packages in your PATH. Rather than sequentially placing binaries into the /usr/bin/ directory, as a package manager would when instructed by Ansible to install a set of packages, NixOS merely replaces the bin symlink with one that points at an entirely new pre-generated directory containing the desired packages' binaries (well, symlinks to them, for efficiency). There cannot possibly be an in-between state where only some of the binaries exist; it's all or nothing. (This concept applies to all parts that make up a Linux system of course, not just binaries in the PATH. I just chose that as an easy-to-understand example.)
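    To make that concrete, here's a minimal sketch of what the declarative side can look like (the package choices are placeholders):

    ```nix
    # configuration.nix - declare which packages should be on the PATH.
    # NixOS builds a fresh, pre-generated "system profile" directory
    # from this list and repoints a symlink at it on activation.
    { pkgs, ... }:
    {
      environment.systemPackages = with pkgs; [
        git
        htop
        ripgrep
      ];
    }
    ```

    After a `nixos-rebuild switch`, `/run/current-system` points at the new generation; the swap is a single symlink replacement, so there is never a half-installed set of binaries.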
    By this property, your root filesystem no longer contains any operating system configuration state. You could wipe it and NixOS would not care. In fact, many NixOS users do that on every boot or even use a tmpfs for /.
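    For the curious, the tmpfs-for-/ setup is only a few lines. A sketch, assuming /boot and /nix live on real filesystems and anything worth keeping is persisted explicitly:

    ```nix
    # configuration.nix - mount / as a tmpfs so the root filesystem
    # starts empty on every boot; the OS itself comes from the Nix
    # store and everything else is deliberately ephemeral.
    {
      fileSystems."/" = {
        device = "none";
        fsType = "tmpfs";
        options = [ "defaults" "size=2G" "mode=755" ];
      };
    }
    ```

    Anything you do want to survive a reboot (SSH host keys, logs, …) has to be mounted or linked in from persistent storage, which is exactly the point: all remaining state is state you opted into.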

    (Immutability is a property that NixOS gains almost by accident; that's not its primary goal.)

  • Everyone here has already explained their various stances very eloquently and convincingly, so I won't argue against that; instead I'll just put forth my own 2c on why I use Silverblue instead of Nix/Ansible.

    The main draw for me in using Silverblue (well, uBlue to be exact) is the no-cost, cloud-based, industry-standard CI/CD and OCI workflow. Working with these standard technologies also helps me polish up my skills for work, as we've started to make use of containers and gitops workflows, so the skills I'm gaining at a personal level are easily translatable to work (and vice-versa).

    With Nix (the declarative way), I'd have to learn the Nix language first and maintain the non-standard Nix config files and, tbh, I don't want to waste so much time on something that no one in the industry actually uses. Declarative Nix won't really help me grow professionally, and whilst I agree it has some very unique advantages and use-cases, it's completely overkill for my personal needs. That said, I'm happy with using Nix the imperative way: I don't need to learn the Nix language, and it's great having access to a vast package repository and being able to run my programs without the limitations of containers.

    As for Ansible, I'd have to run my own server (and pay for it, if it's in the cloud), and spend time maintaining it too. And although we use Ansible at work as well, so the skills wouldn't be a waste of time, it's unfortunately too inflexible/rigid for my personal needs: my personal systems are constantly evolving, whether in the common packages I use or my choice of DE (my most recent fling is with Wayfire). With an Ansible workflow, I'd be constantly editing YAML files instead of actually making the change I want to see. It's overkill for me, and a waste of time (IMO). You could argue that I'm already editing my configs on Github with uBlue, but it's nowhere near as onerous as having to write playbooks for every single thing. And as I mentioned, I like to maintain some flexibility and manual control over my personal machines, and Ansible would just get in the way of that.

    With the uBlue workflow, I just maintain my own fork on Github with most of my customisations, plus a separate repository for specific dotfiles and scripts that I don't want to be part of my image. Pull bot keeps my main uBlue repo in sync with upstream, and I only need to jump in if there are merge conflicts that cannot be resolved automatically. At the end of it all, I get a working OCI image, with a global CDN and 90 days of image archives, allowing for flexible rollback options - all of this without incurring any costs or wasting too much time on my part.

    Plus I can easily switch between different DEs and OCI distros with just a simple rebase - I could go from a Steam Deck-like gaming experience (Bazzite) to a productivity-oriented workstation (Bluefin), or play around with some fancy new opinionated environments like Hyprland and River (Wayblue) - all with just a simple rebase and a reboot, without needing to learn some niche language or waste time writing config files. How cool is that?

  • Ansible must examine the state of a system, detect that it is not in the desired state and then modify the current state to get it to the desired state. That is inherently more complex than building an immutable system that is in the desired state by construction and cannot get out of the desired state.

    It's fine as long as you use other people's rules for Ansible and just configure those, but it gets tricky fast when you start to write your own. Reliably discovering the state of a running system is surprisingly tricky.

  • I wonder now, how big the delta is for people like me: All my desktops/servers are based on Debian stable with heavy customization, but 100% automated via Ansible.

    Close to none. Immutable distros solve the same problem that was solved years ago with Ansible and BTRFS/ZFS snapshots. There's an important long-term difference, however...

    Immutable distros are all about turning things that were easy into complex, “locked down”, “inflexible” bullshit to justify jobs, paid tech stacks and a soon-to-be-released proprietary solution. We already had Ansible, containers, ZFS and BTRFS providing all the immutability required, but someone decided it was time to transform proven development techniques in the hopes of eventually selling some orchestration and/or other proprietary repository / platform, like Docker / Kubernetes does. Docker isn’t totally proprietary and there’s Podman, but consider the following: it doesn’t really matter if there are truly open-source and open ecosystems of containerization technologies. In the end people/companies will pick the proprietary / closed option just because “it’s easier to use” or some other specific thing that is good in the short term and very bad in the long term.

    “Oh but there are truly open-source immutable distros” … true, but again this hype is much like Docker’s, and it will inevitably lead people down a path that ends up requiring some proprietary solution or dependency somewhere (DockerHub), one that is only required because the “new” technology alone doesn’t deliver as others did in the past. The people now popularizing immutable distributions clearly haven’t had any experience with them before the current hype. Let me tell you something: immutable systems aren’t a new thing, we already had them with MIPS devices (mostly routers and IoT devices), and people have been moving to ARM and mutable solutions because they’re better, easier and more reliable.

    The RedHat/CentOS fiasco was another great example of these ecosystems, and once again all those people who got burned, instead of moving to a truly open-source distribution like Debian, decided to pick Ubuntu - it’s just a matter of time until Canonical makes a similar move.

    Nowadays, without the Internet and these ecosystems people can’t even do shit anymore. Have a look at the current state of embedded development: in the past people were able to program AVR / PIC / Arduino boards offline, and today everyone has moved to ESP devices and depends on the PlatformIO + VSCode ecosystem to code and deploy to the devices. Speaking of VSCode, it is also open-source until you realize that 1) the language plugins you require can only be compiled and run in official builds of VSCode, and 2) Microsoft took over a lot of the popular 3rd-party language plugins and repackaged them under a different license… making it so that if you try to create a fork of VSCode you can’t have support for any programming language, because it won’t be an official VSCode build. MS be like :).

    All those things that make development very easy and lowered the bar for newcomers have the dark side of being designed to reconfigure and envelop the way development gets done so someone can profit from it. That is sad, and above all it sets dangerous precedents and creates generations of engineers and developers that don’t have truly open tools like we did.

    This is all about commoditizing development - it’s a vicious cycle that never ends. Yes, I say commoditizing development because, if you look at it, those techs only make things easier for the entry-level developer, and companies, instead of hiring developers for their knowledge and ability to develop, are just hiring “cheap monkeys” that are able to configure those technologies and cloud platforms to deliver something. At the end of the day, the business of those cloud companies is transforming developer knowledge into products/services that companies can buy with a click.
