I don't think so. They're going to have to do a lot better than a tutorial to win people back. That said, it also sucks that the two Flux models are distilled, which makes them close to impossible to fine-tune.
Those might just be LoRA-merged models, not full fine-tunes. From what I've heard, fine-tuning doesn't work because the models are distilled; you'd have to find a way to undistill them before you could train them.
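For anyone unclear on what "LoRA-merged" means here: the LoRA's low-rank update gets baked directly into the base weights, so no actual fine-tuning of the distilled model happens. A minimal PyTorch sketch, with made-up shapes and random tensors standing in for trained LoRA factors:

```python
import torch

# Illustrative sizes only; real Flux layers are far larger.
d_out, d_in, rank, alpha = 64, 64, 8, 16

base_weight = torch.randn(d_out, d_in)   # frozen base model weight W
lora_A = torch.randn(rank, d_in) * 0.01  # stand-in for trained LoRA factor A
lora_B = torch.randn(d_out, rank) * 0.01 # stand-in for trained LoRA factor B

# Merging bakes the low-rank update into the base weight:
#   W' = W + (alpha / rank) * B @ A
merged_weight = base_weight + (alpha / rank) * (lora_B @ lora_A)
```

The base model's weights are never trained in this process, which is why a "LoRA merge" can be published even for a distilled model that resists actual fine-tuning.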