Make a RAID 5 with two almost-full disks and one empty disk
Hello,
I am going to upgrade my server, and since I will be able to fit more hard disks, I want to take the opportunity to give my data a little more security (against loss).
Currently I have 2 hard drives formatted as ext4 with data on them, and I want to buy a third (all three the same capacity) and put them in RAID 5, so that in the future I can add more hard drives and increase the capacity.
For budget reasons, right now I can only buy what would be the third disk, so it is impossible for me to back up the data I currently have.
The data itself is not valuable; if any file gets corrupted, I could download it again. However, there are enough terabytes (20) to make re-downloading everything madness.
This is madness, but since this is a hobby project and not a production server, there is a way:
1. Shrink the filesystems on the existing disks to free up as much space as possible, and shrink their partitions.
2. Add a new partition to each of the three disks, and make a RAID5 volume from those partitions.
3. Move as many files as possible to the new RAID5 volume to free up space in the old filesystems.
4. Shrink the old filesystems/partitions again.
5. Expand each RAID component partition one at a time by removing it from the array, resizing it into the empty space, and re-adding it to the array, giving plenty of time for the array to rebuild.
6. Move files, shrink the old partitions, and expand the new array partitions as many times as needed until all the files are moved.
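A rough sketch of one iteration, using hypothetical device names (/dev/sda1 and /dev/sdb1 are the old ext4 filesystems, /dev/sdc is the new disk) and placeholder sizes. This is an outline of the idea, not a tested recipe; any wrong device name or size here destroys data:

```shell
# Hypothetical layout: /dev/sda1 and /dev/sdb1 hold the old ext4 data,
# /dev/sdc is the new empty disk. All sizes are examples only.

# Step 1: shrink an old filesystem and its partition (repeat for sdb):
umount /dev/sda1
e2fsck -f /dev/sda1
resize2fs /dev/sda1 9000G
parted /dev/sda resizepart 1 9001GiB

# Step 2: add a RAID partition in the freed space and build the array:
parted /dev/sda mkpart raid 9001GiB 100%
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
      /dev/sda2 /dev/sdb2 /dev/sdc1
mkfs.ext4 /dev/md0

# Step 5: after shrinking the old partitions again, grow the array
# one component at a time, letting each rebuild finish:
mdadm /dev/md0 --fail /dev/sda2 --remove /dev/sda2
parted /dev/sda resizepart 2 100%   # grow into the freed space
mdadm /dev/md0 --add /dev/sda2      # then wait for the rebuild
# ...repeat for the other two components, then claim the new space:
mdadm --grow /dev/md0 --size=max
resize2fs /dev/md0
```

Each `--add` triggers a full rebuild, which is where the multi-day timeline comes from.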
This could take several days to accomplish, because of the RAID5 rebuild times. The less free space, the more iterations and the longer it will take.
Even if you could free up only 1GB on each of the drives, you could start the process with a RAID5 of 1GB per disk, migrate ~2GB of data into it, free up another 2GB on the old disks, expand the RAID, and rinse and repeat. It will take a very long time and carry a lot of risk due to the increased stress on the old drives, but it is certainly something that's theoretically achievable.
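To get a feel for the iteration count, here is a toy model. It assumes a 3-disk RAID5 (usable space is twice the per-disk partition size) and ignores all overheads; the numbers are illustrative, not from the thread:

```shell
# Toy model: each pass frees free_tb per old disk, so a 3-disk RAID5
# gains 2*free_tb of usable space per pass (two data shares + parity).
data_tb=20   # data still sitting on the old filesystems, in TB
free_tb=1    # space you can free per disk on each pass, in TB
grown=0
passes=0
while [ "$grown" -lt "$data_tb" ]; do
  grown=$((grown + 2 * free_tb))
  passes=$((passes + 1))
done
echo "passes needed: $passes"
```

With 1 TB freed per pass and 20 TB to migrate, that is 10 passes, each followed by a rebuild.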
Not really with mdadm raid5.
But it sounds like you like to live dangerously. You could always go the BTRFS route. Yeah, I know btrfs raid56 "will eat your data", but you said it's nothing that important anyway. There are some things to keep in mind when running BTRFS in raid5, e.g. scrub each disk individually and use raid1c3 for metadata.
But basically, BTRFS is one of the only filesystems that allows you to add disks of any size or number, and you can convert the profile on the fly, while in use.
So in this case, you could format the new disk with BTRFS as a single disk. Copy over the contents of one of your other disks, then once that disk is empty, add it as an additional device to your existing BTRFS volume. Then do the same with the last disk. Once that is done, you can run a balance to convert the single profile into a raid5 data profile.
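With hypothetical device names (/dev/sdc is the new disk, /dev/sda and /dev/sdb the old ones), the outline above looks roughly like this; check the btrfs documentation before running anything:

```shell
# /dev/sdc is the new disk; /dev/sda and /dev/sdb still hold data.
mkfs.btrfs /dev/sdc
mount /dev/sdc /mnt/pool
# ...copy the first old disk's files into /mnt/pool, then wipe it...
btrfs device add -f /dev/sda /mnt/pool
# ...copy the second old disk's files over, then add it too...
btrfs device add -f /dev/sdb /mnt/pool
# Convert data to raid5 and metadata to raid1c3 (as recommended above):
btrfs balance start -dconvert=raid5 -mconvert=raid1c3 /mnt/pool
```

Note that raid1c3 metadata needs at least three devices, so run the convert only after all three disks are in the pool.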
Also, I would STRONGLY recommend against connecting disks via USB. USB HD adapters are notorious for causing all kinds of issues when used in any sort of advanced setup, apart from temporary single disk usage.
Traditional RAID isn’t very flexible and is meant/easiest for fresh disks without data. Since you’ve already got data in place, look into something like SnapRAID.
I'd suggest you move toward a backup approach ("RAID is not a backup") first. Assuming you have 2×10 TB, get a third drive, copy half of your files to it, and disconnect it; now half your files are protected. Save up, get another, copy the other half, and now all your files are protected. If you're trying to do RAID over USB, don't; you are already done. Otherwise (using SATA or better) you can proceed to build your array in an orderly fashion.
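For the copy step itself, something like rsync preserves metadata and can be resumed; the mount points here are hypothetical examples:

```shell
# Copy one old disk's files to the new disk (paths are examples).
rsync -aH --info=progress2 /mnt/old-disk-1/ /mnt/new-disk/
# Re-run with checksums as a read-back verification pass; it should
# list no differences if the copy is intact:
rsync -aHc --dry-run --itemize-changes /mnt/old-disk-1/ /mnt/new-disk/
```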
I know it's not a backup, but for me it's the sweet spot between money and security. Not only for these 2 hard disks, but also for the ability to add more HDs later without mirroring everything.
Seriously though, it shouldn't give much peace of mind. All RAID does is add a little resistance to hardware failures. If you mistakenly delete files, you are hosed. If your hardware causes corruption, you are hosed. If something happens to your computer, such as physical abuse, your drives are likely going to be damaged, which also means you may be hosed. If one drive dies and then another drive dies before you move your data over, you are also hosed.
The big takeaway is that RAID only really buys time. It can prevent downtime, but it will not save you.
My recommendation would be to utilize LVM. Set up a PV on the new drive and create an LV filling the drive (with an FS on it), then move all the data off one old drive onto this new drive, reformat that old drive as a second PV in the volume group, and expand the size of the LV. Repeat the process for the second old drive; then, instead of extending the LV, set the parity option on the LV to 1. You can add further disks, increasing the LV size or adding parity or mirroring in the future, as needed. This also gives you the advantage that you can (once you have some free space) create another LV with different mirroring or parity requirements.
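Sketched with hypothetical device names (/dev/sdc new, /dev/sda and /dev/sdb old). Note that converting a linear LV to raid5 needs a reasonably recent lvm2 and may require intermediate conversion steps (see lvmraid(7)), so treat this as an outline:

```shell
pvcreate /dev/sdc
vgcreate data /dev/sdc
lvcreate -n files -l 100%FREE data
mkfs.ext4 /dev/data/files
# ...move the first old disk's data onto it, then absorb that disk...
pvcreate /dev/sda
vgextend data /dev/sda
lvextend -l +100%FREE data/files
resize2fs /dev/data/files
# ...empty and absorb the second old disk the same way, but leave its
# space free so it can hold the parity instead of extending the LV...
pvcreate /dev/sdb
vgextend data /dev/sdb
lvconvert --type raid5 --stripes 2 data/files
```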
So I see a few problems with what you want: for a RAID 5 setup you will need at least four drives, since your information is striped across 3 and then the fourth is a parity drive. With 3 drives you have an incredibly high likelihood of losing your parity drive.
To my knowledge, you will need to wipe the drives to put them in any kind of RAID. Since striping essentially lays the data out in custom sections of blocks, I don't think mdadm is smart enough to move your existing files as well.
I would really recommend holding off on your project until you can back up the information and get a fourth drive. I know there is a lot of debate between RAID 5 and RAID 6, but I really prefer the peace of mind that RAID 6 gives.
You can do RAID 5 with three disks. It's fine. Not ideal, but fine.
My biggest concern is what OP is using as a server. If these disks are attached via USB, they are not going to have reliable connections, and it's going to trigger frequent RAID rescans and resyncs any time one of the three disks drops out. And the extra load from that might cause even more drops.
Seconding this. For starters, when tempted to go for RAID 5, go for RAID 6 instead. I've had drives fail in RAID 5, and then had a second drive fail during the increased I/O associated with replacing the first one.
And yes, setting up RAID wipes the drives. Is the data private? If not, a friendly datahoarder might help you out with temporary storage.
It's possible to convert drives to RAID in-place... but strongly discouraged.
Since OP will have a blank drive, they could play musical chairs by setting up a new RAID on the new empty drive, copy data from one drive, wipe that drive, grow the array, copy data from the third drive, wipe, grow... But that's going to take a long time, and you'll have to keep notes about where you are in the process, lest you forget which drive is which over the multiple days this will take.
This is how I do it. No striping, normal partitions, different hard drive sizes, pretty easy. This approach makes upgrades super easy too. Currently running a 76 TB mergerfs pool with two 14 TB SnapRAID parity drives.
If you used ZFS this would be easier to fix. I would recommend switching to it.
It sounds like you need another disk. I know that isn't always possible, and if it isn't, delete enough data that what's left fits on a single disk. Without backups you are destined to lose your data anyway.
For a three-disk ZFS setup I would go with raidz1, as that gives you one drive's worth of redundancy.
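Assuming all three disks can be emptied first (creating the pool destroys their contents, which is why the data has to be staged elsewhere), a raidz1 pool would look something like this, with hypothetical device names:

```shell
# raidz1: one disk's worth of parity across three same-size disks.
# This wipes the disks, so stage the data somewhere else first.
zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc
zfs create tank/data     # a dataset to hold the files
zpool status tank        # verify the pool layout
```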
If all of your data won't fit on a single drive, you can't increase your reliability with RAID at this point. At a minimum, for RAID 1 you need one drive large enough to hold all your data, replicated to at least one other drive. Moving up to RAID levels that add redundancy (not just striping) still costs capacity: usable space is bounded by the smallest drive in the group, and the trade-off only improves once you have enough drives.
Honestly, if you want to increase reliability for fear of data loss, take a pass through your data and see if there's anything you can ditch (or easily replace later), and see how small that data set can get. Revisit RAID combinations after that.