I'm in the process of wiring a home before moving in and getting excited about running 10G from my server to my computer. Then I see 25G gear isn't that much more expensive, so I might as well run at least one fiber line. But what kind of three-node Ceph monster will it take to make use of any of this bandwidth (plus run all my Proxmox VMs and LXCs in HA), and how much heat will I have to deal with? What's your experience with high-speed homelab NAS builds and the electric bill shock that comes later? The EPYC 7002 series looks perfect but seems to idle high.
I've got a 3-node Proxmox/Ceph cluster with 10G, plus a separate NAS. They're all rackmount with dual PSUs. Add in the necessary switching, and my average load is about 800W. Throw my desktop (also on 10G) into the mix and it runs 1.1kW.
That's roughly $50-60 extra in electricity costs for me monthly.
That would be around €300 a month in Germany, even on a cheap contract.
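For anyone wanting to sanity-check those numbers, here's a rough back-of-the-envelope in Python (the per-kWh rates are illustrative assumptions, roughly US vs. German residential pricing, not anyone's actual tariff):

```python
# Monthly cost of a constant load. The per-kWh rates below are
# illustrative assumptions, not anyone's actual contract.
def monthly_cost(watts: float, rate_per_kwh: float) -> float:
    kwh_per_month = watts / 1000 * 24 * 30
    return kwh_per_month * rate_per_kwh

for watts in (800, 1100):
    usd = monthly_cost(watts, 0.10)   # assumed ~US rate
    eur = monthly_cost(watts, 0.38)   # assumed ~German rate
    print(f"{watts} W continuous: ${usd:.0f}/mo vs €{eur:.0f}/mo")
```

That works out to about $58 vs €219 at 800W, and $79 vs €301 at 1.1kW, which roughly lines up with both the $50-60 and the €300 figures above.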
I'm limiting myself to one combined NAS/application server at the moment, with the others turned on only when I want to try something out.
Yeah, we pay a lot. We also have one of the lowest electricity downtimes, on average about 10 minutes per year... so that's kind of a (small) advantage you get for the premium price.
I use about the same. But that's more due to my hardware being a bit older: two Dell R710s, one R510, and a custom-built server. Everything is still 1G. In my case electricity is not a big deal thanks to solar; we produce much more than we can use ourselves.
I'm afraid of dumping 500+ watts into an (air-conditioned) closet. How are you able to saturate the 10G? I was under the impression that Ceph write speed is gated by the slowest drive, so even SATA SSDs won't fill the bucket. I imagine that's because replication (not parity/striping) is what spreads the data. I'd like to stick to lower-power consumer gear, but Ceph looks CPU-, RAM-, and bandwidth-hungry (both storage and network), plus latency-sensitive.
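To make that replication intuition concrete, here's a toy sketch (made-up latency and throughput numbers, and a deliberate simplification of how Ceph actually acks writes across a placement group):

```python
# Toy model: with size=3 replication a write is only acked once every
# replica has committed, so the slowest OSD gates latency. Striping
# (no redundancy) instead lets per-device bandwidth add up.
# All numbers are invented for illustration, not benchmarks.
replica_latencies_ms = [0.5, 0.6, 4.0]    # two fast SSDs, one slow disk
stripe_throughputs_mbps = [500, 520, 90]

replicated_write_ms = max(replica_latencies_ms)   # wait for the slowest
striped_read_mbps = sum(stripe_throughputs_mbps)  # devices work in parallel

print(f"replicated write acked after ~{replicated_write_ms} ms")
print(f"striped aggregate throughput ~{striped_read_mbps} MB/s")
```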
I ran Proxmox/Ceph over 1G on e-waste mini PCs and it was... unreliable. Now my NAS is my HA storage, but I'm not thrilled about beating up QLC NAND with hobby VMs.
My 10G is far from saturated, but I do try to keep things in RAM where possible. I figure that with 100GB of DDR4 in my main server, that should be enough to feed a 10G link (quick math below).
I've got Ceph running on Intel enterprise SSDs, so they're pretty quick.
I also tried running Ceph on 1G. I found it unreliable as well.
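On the RAM-to-10G point above, the headroom is large; here's a quick sketch (the DDR4 bandwidth figure is a ballpark assumption, not a measurement):

```python
# Can RAM-cached reads keep a 10G link busy? The DDR4 number is a
# rough single-channel ballpark, not a measured figure.
link_gbit_s = 10
link_gbyte_s = link_gbit_s / 8     # ~1.25 GB/s of payload on the wire
ddr4_gbyte_s = 20                  # ballpark single-channel DDR4

print(f"10G needs ~{link_gbyte_s:.2f} GB/s; DDR4 gives ~{ddr4_gbyte_s} GB/s "
      f"(~{ddr4_gbyte_s / link_gbyte_s:.0f}x headroom before disks matter)")
```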