Description of problem:
anaconda fsset.py RAIDDevice::setupDevice does not pass --bitmap=internal to mdadm --create. This prevents arrays from checkpointing their reconstruction activity, so an interrupted reconstruction must restart from scratch. The bitmap may have some performance impact, which should be measured. See the mdadm manpage for details on --bitmap usage.

Version-Release number of selected component (if applicable):
anaconda-11.4.1.18

How reproducible:
Always

Steps to Reproduce:
1. Read the code; note that --create does not use --bitmap.
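A hedged sketch of the invocation the installer could issue instead (device names and RAID level are placeholder assumptions, not what anaconda actually generates):

```shell
# Sketch only: /dev/md0, /dev/sda1 and /dev/sdb1 are example names.
# --bitmap=internal stores a write-intent bitmap inside the array
# metadata, so an interrupted resync can resume at its checkpoint
# instead of restarting from block zero.
mdadm --create /dev/md0 \
      --level=1 --raid-devices=2 \
      --bitmap=internal \
      /dev/sda1 /dev/sdb1

# Confirm the bitmap is present:
mdadm --detail /dev/md0 | grep -i bitmap
```

These commands require root and real (or loopback) block devices, so they are shown as a command fragment rather than a runnable script.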
Adding dledford to get his comments. (Of course, if --bitmap=internal is best practice, maybe mdadm should default to creating RAID arrays with it. :)
There are many things that anaconda doesn't do right in terms of mdadm RAID arrays:

1) We don't support version 1 superblocks
2) We don't support bitmaps
3) We don't support names other than /dev/md?
4) We don't support partitioned md RAID arrays
5) We don't support using whole-disk devices as md constituents
6) We don't support any chunk-size/stripe-size setting or other performance-tuning options at array creation (and it's damn hard to change this stuff after the fact)

I'm all for fixing anaconda's RAID setup, but if we are going to take the time to fix it, I suggest we do it right so we don't have to come back in six months and do it again.

As for the default on md bitmaps, the upstream mdadm maintainer hasn't made it the default because it involves trade-offs that he wants the end user to decide on. Having a bitmap is necessary to support partial reconstruction, and it also makes rebuilding a device that dropped out of an array momentarily much faster, since only the blocks that changed while the device was out of the array are resynced. The downside is that writes to the array must be preceded by a write to the bitmap (we use lazy bitmap clearing, so the clear side isn't so bad), and this affects write performance.
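Since the maintainer leaves that trade-off to the user, it is worth noting the bitmap can be toggled after creation as well; a hedged sketch (the array name is a placeholder, and the array must be active, with root privileges):

```shell
# Add an internal write-intent bitmap to an existing array:
mdadm --grow /dev/md0 --bitmap=internal

# Remove it again if the write-performance cost is unacceptable:
mdadm --grow /dev/md0 --bitmap=none
```

Shown as a command fragment: it operates on a live md array and cannot run standalone.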
Many of the things listed are likely to be site preferences, and creating UI for all of them would be a bit... fugly. I'd like to see the things that are really best practices fixed, without depending on the end user to configure them. If we're really going to take the all-or-nothing approach, we need to pick a more appropriate target. Either way, we're past the Fedora 10 feature freeze, which means we really shouldn't be doing anything for 10. Moving to 11Target.
Radek - I believe you handled the RAID stuff for the storage rewrite. Can you please take a look at what's being asked here and see what makes sense to do? You'll probably need to do a lot of asking around.
I'm still waiting on this. Last night I had to do a manual upgrade from an earlier Fedora install, which used the md 0.90 superblock. Moving from 2 250GB disks to 2 1TB disks was not possible with the 0.90 superblock, but would have been possible with the 1.2 superblock. Catch-22: because the 0.90 superblock sits at the end of the disk partition, it can't be found if you grow the partition in the partition table; you can't use mdadm --grow to grow a device unless the array is active, and you can't edit the partition table and re-read it while the array is active.

With the new 1.2 superblock, the user can disable the bitmap if he so chooses, or re-enable it later; that flexibility is one more reason not to stay with the old 0.90 superblock. _Please_ start using the new 1.2 superblock with the bitmap enabled!

-Matt
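For reference, the grow path the 0.90 layout blocks can be sketched like this (device names are example assumptions; every step requires root):

```shell
# With 1.2 metadata, stored near the start of the device, the member
# partitions can be enlarged in place and the array grown afterwards:

# 1. Stop the array so the partition table can be edited and re-read.
mdadm --stop /dev/md0
# ...enlarge /dev/sda1 and /dev/sdb1 with parted/fdisk here...

# 2. Reassemble; the superblock is still found at its fixed offset
#    from the start of each (now larger) partition.
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1

# 3. Grow the md device to fill the enlarged partitions.
mdadm --grow /dev/md0 --size=max
```

With 0.90 metadata the same sequence fails at step 2, because the superblock is no longer at the end of the enlarged partition where the kernel looks for it.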
I am closing this bug per https://bugzilla.redhat.com/show_bug.cgi?id=619282#c2, which applies to Fedora too.