Description of problem:
With 4 SCSI disks holding 15 partitions each, with each pair of partitions grouped into a RAID 1 device, almost all of those turned into LVM physical volumes and all of them used in logical volumes, mkblkdevs takes in excess of 30 seconds, resume has run for several minutes without completing, and, after arranging for resume to be skipped (by tweaking the init script), it is mkrootdev that takes forever.

To give you an idea of how bad the situation is, I captured strace output for mkblkdevs run on an already-running system. Watch how the strace line numbers at which each device is first visited grow as O(n^2). Can't we use a linear algorithm here?

275:open("/sys/block/sde/dev", O_RDONLY|0x80000 /* O_??? */) = 5
475:open("/sys/block/sde/sde1/dev", O_RDONLY|0x80000 /* O_??? */) = 6
544:open("/sys/block/md30/dev", O_RDONLY|0x80000 /* O_??? */) = 5
685:open("/sys/block/dm-6/dev", O_RDONLY|0x80000 /* O_??? */) = 5
719:open("/dev/mapper/control", O_RDWR) = 5
750:open("/sys/block/sde/slaves", O_RDONLY|O_NONBLOCK|O_DIRECTORY|0x80000) = 6
986:open("/sys/block/dm-5/dev", O_RDONLY|0x80000 /* O_??? */) = 6
1545:open("/sys/block/dm-4/dev", O_RDONLY|0x80000 /* O_??? */) = 6
2474:open("/sys/block/dm-3/dev", O_RDONLY|0x80000 /* O_??? */) = 6
4115:open("/sys/block/dm-2/dev", O_RDONLY|0x80000 /* O_??? */) = 6
6486:open("/sys/block/dm-1/dev", O_RDONLY|0x80000 /* O_??? */) = 6
9308:open("/sys/block/dm-0/dev", O_RDONLY|0x80000 /* O_??? */) = 6
12617:open("/sys/block/md29/dev", O_RDONLY|0x80000 /* O_??? */) = 6
18373:open("/sys/block/md28/dev", O_RDONLY|0x80000 /* O_??? */) = 6
25312:open("/sys/block/md27/dev", O_RDONLY|0x80000 /* O_??? */) = 6
33542:open("/sys/block/md26/dev", O_RDONLY|0x80000 /* O_??? */) = 6
43171:open("/sys/block/md25/dev", O_RDONLY|0x80000 /* O_??? */) = 6
53821:open("/sys/block/md24/dev", O_RDONLY|0x80000 /* O_??? */) = 6
66014:open("/sys/block/md23/dev", O_RDONLY|0x80000 /* O_??? */) = 6
79858:open("/sys/block/md22/dev", O_RDONLY|0x80000 /* O_??? */) = 6
95461:open("/sys/block/md21/dev", O_RDONLY|0x80000 /* O_??? */) = 6
115829:open("/sys/block/md20/dev", O_RDONLY|0x80000 /* O_??? */) = 6
141746:open("/sys/block/md19/dev", O_RDONLY|0x80000 /* O_??? */) = 6
167353:open("/sys/block/md18/dev", O_RDONLY|0x80000 /* O_??? */) = 6
197347:open("/sys/block/md17/dev", O_RDONLY|0x80000 /* O_??? */) = 6
227903:open("/sys/block/md16/dev", O_RDONLY|0x80000 /* O_??? */) = 6
262117:open("/sys/block/md15/dev", O_RDONLY|0x80000 /* O_??? */) = 6
296839:open("/sys/block/md14/dev", O_RDONLY|0x80000 /* O_??? */) = 6

Version-Release number of selected component (if applicable):
nash-6.0.14-1.fc8.x86_64
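To illustrate what a linear approach could look like (this is not nash's actual code; names such as build_blkdev_cache() and struct blkdev are made up for the sketch), a single pass over /sys/block that records each device's major:minor once would replace the repeated rescans:

/* Hypothetical sketch of a linear scan: enumerate /sys/block once and
 * cache each device's major:minor, so each device is opened exactly
 * once.  Not nash's actual code; all names here are illustrative. */
#include <stdio.h>
#include <stdlib.h>
#include <dirent.h>

struct blkdev {
    char name[64];
    unsigned int major, minor;
};

static struct blkdev *cache;
static size_t ncache;

/* Read "MAJOR:MINOR" from /sys/block/<name>/dev. */
static int read_devno(const char *name, unsigned int *maj, unsigned int *min)
{
    char path[128];
    FILE *f;
    int ok;

    snprintf(path, sizeof(path), "/sys/block/%s/dev", name);
    if (!(f = fopen(path, "r")))
        return -1;
    ok = (fscanf(f, "%u:%u", maj, min) == 2) ? 0 : -1;
    fclose(f);
    return ok;
}

/* One pass over /sys/block: O(n) opens instead of O(n^2). */
static int build_blkdev_cache(void)
{
    DIR *dir = opendir("/sys/block");
    struct dirent *d;

    if (!dir)
        return -1;
    while ((d = readdir(dir))) {
        unsigned int maj, min;
        struct blkdev *tmp;

        if (d->d_name[0] == '.')
            continue;
        if (read_devno(d->d_name, &maj, &min) < 0)
            continue;
        tmp = realloc(cache, (ncache + 1) * sizeof(*cache));
        if (!tmp)
            break;
        cache = tmp;
        snprintf(cache[ncache].name, sizeof(cache[ncache].name),
                 "%s", d->d_name);
        cache[ncache].major = maj;
        cache[ncache].minor = min;
        ncache++;
    }
    closedir(dir);
    return 0;
}

Partitions (e.g. /sys/block/sde/sde1 above) would need one extra level of directory descent, but that still visits each node once.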
Yep, this algorithm totally sucks.
Aren't there plans to fix this? It makes it impossible to boot Fedora after installation on any reasonably configured LVM/RAID setup that is planned for growth without needing to at least double the disk capacity (which is what leads to the large number of partitions and RAID devices). Likewise, any system with root on a volume group with a large number (tens) of logical volumes will take hours to boot. This is unacceptable, and it is silly to wait until RHEL customers run into this before fixing it. It is a major regression that makes Fedora 8 unusable for any sizable host for virtual machines. Can we please make this a blocker, or at least add a note to the release notes?

(FWIW, the algorithm is not just O(n^2): at least one part of it is O(n^3), and I get the impression that there is another factor of O(n) on top of that. Unfortunately, I've been traveling too much and have been unable to set aside the time to figure out what changed to make it so bad, but I think caching the result of is_sysfs_slave(), which is where almost all of the time is spent, could significantly speed things up. A sketch of what I mean follows.)
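Something along these lines is roughly what I have in mind. It is only a sketch: is_sysfs_slave_uncached() stands in for nash's existing function, whose real name and signature may differ, and it assumes the answer for a given (device, slave) pair does not change during a single mkblkdevs run:

/* Hypothetical memoization wrapper around is_sysfs_slave().
 * Assumes the result is stable for the duration of one run. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct slave_memo {
    char key[130];                /* "<dev>/<slave>" */
    int result;
    struct slave_memo *next;
};

static struct slave_memo *memo_head;

/* Stand-in for the existing, expensive sysfs-walking check. */
extern int is_sysfs_slave_uncached(const char *dev, const char *slave);

int is_sysfs_slave(const char *dev, const char *slave)
{
    char key[130];
    struct slave_memo *m;

    snprintf(key, sizeof(key), "%s/%s", dev, slave);

    /* Return a cached answer if we have seen this pair before. */
    for (m = memo_head; m; m = m->next)
        if (!strcmp(m->key, key))
            return m->result;

    m = malloc(sizeof(*m));
    if (!m)
        return is_sysfs_slave_uncached(dev, slave);
    snprintf(m->key, sizeof(m->key), "%s", key);
    m->result = is_sysfs_slave_uncached(dev, slave);
    m->next = memo_head;
    memo_head = m;
    return m->result;
}

A linked list keeps the sketch short; a hash table would make the lookups themselves cheap too, but even the list version avoids the repeated sysfs reads that dominate the run time.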
Created attachment 226841 [details]
Patch that speeds up mkblkdevs on my (running) desktop from 19.5 minutes down to 4 minutes

Here's a patch that shaves a *huge* amount of system time off mkblkdevs and, presumably, other nash boot-time operations. It's still far too slow, at 4 minutes just for mkblkdevs (compared with nearly nothing back in mkinitrd-6.0.9), but it's better than a system that takes hours to boot up.
I see it didn't make F8 :-( That's unfortunate.

Anyhow... do we even need all this slave-device scanning to boot up? I can't see a reason not to just skip the exercise entirely. Of course we want the dependency information when configuring the initrd and deciding which drivers we need, but to actually boot up and create the device nodes, the dependencies are not important, are they? A sketch of what I mean follows.
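Concretely, something like the following; a minimal sketch under the assumption above, not nash's actual implementation, that just creates the nodes without resolving any slave relationships:

/* Hypothetical sketch: create /dev block nodes straight from
 * /sys/block, with no slave-dependency resolution at all.
 * Error handling trimmed; not nash's actual code. */
#include <stdio.h>
#include <dirent.h>
#include <sys/stat.h>
#include <sys/sysmacros.h>   /* makedev() */

static void make_block_nodes(void)
{
    DIR *dir = opendir("/sys/block");
    struct dirent *d;

    if (!dir)
        return;
    while ((d = readdir(dir))) {
        char path[128];
        unsigned int maj, min;
        FILE *f;

        if (d->d_name[0] == '.')
            continue;
        snprintf(path, sizeof(path), "/sys/block/%s/dev", d->d_name);
        if (!(f = fopen(path, "r")))
            continue;
        if (fscanf(f, "%u:%u", &maj, &min) == 2) {
            snprintf(path, sizeof(path), "/dev/%s", d->d_name);
            mknod(path, S_IFBLK | 0600, makedev(maj, min));
        }
        fclose(f);
    }
    closedir(dir);
}

Partitions (e.g. /sys/block/sde/sde1) would need one extra level of descent, and device-mapper names would want the /dev/mapper spelling, but nothing about node creation itself requires the O(n^2) slave scanning.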
*** Bug 353391 has been marked as a duplicate of this bug. ***
Still no change for F-9 :-(
Changing version to '9' as part of upcoming Fedora 9 GA. More information and reason for this action is here: http://fedoraproject.org/wiki/BugZappers/HouseKeeping
This bug appears to have been reported against 'rawhide' during the Fedora 10 development cycle. Changing version to '10'. More information and reason for this action is here: http://fedoraproject.org/wiki/BugZappers/HouseKeeping
(In reply to comment #3)
> Created an attachment (id=226841) [details]

Applied.
Thanks for taking the patch. Unfortunately, I eventually folded and reorganized my disk partitioning to work around this bug, so I can no longer test the effectiveness of this change. Regardless, even after manually applying the fix myself, mkinitrd and boot-up were still intolerably slow at a mere ~80 block devices, and it was not because of lvm :-(
This message is a reminder that Fedora 10 is nearing its end of life. Approximately 30 (thirty) days from now Fedora will stop maintaining and issuing updates for Fedora 10. It is Fedora's policy to close all bug reports from releases that are no longer maintained. At that time this bug will be closed as WONTFIX if it remains open with a Fedora 'version' of '10'.

Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later Fedora version prior to Fedora 10's end of life.

Bug Reporter: Thank you for reporting this issue and we are sorry that we may not be able to fix it before Fedora 10 is end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora please change the 'version' of this bug to the applicable version. If you are unable to change the version, please add a comment here and someone will do it for you.

Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events. Often a more recent Fedora release includes newer upstream software that fixes bugs or makes them obsolete.

The process we are following is described here: http://fedoraproject.org/wiki/BugZappers/HouseKeeping
Fedora 10 changed to end-of-life (EOL) status on 2009-12-17. Fedora 10 is no longer maintained, which means that it will not receive any further security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of Fedora please feel free to reopen this bug against that version.

Thank you for reporting this bug and we are sorry it could not be fixed.