Red Hat Bugzilla – Bug 182446
dmraid not activated during boot
Last modified: 2007-11-30 17:11:24 EST
Description of problem:
I've been testing the new "install to dmraid" feature. Very cool, BTW.
I had been working with this in January, and this week I installed FC5 test 3.
The install finally went OK after several attempts where it would just hang
during package installation -- maybe another bug needs to be filed for that?
After the install, during boot, the "dm" commands in the initramfs's init
script don't appear to be doing anything.
The init has these commands:
dm create nvidia_hcddcidd 0 586114702 mirror core 2 64 nosync 2 8:16 0 8:0 0
dm partadd nvidia_hcddcidd
echo Scanning logical volumes
lvm vgscan --ignorelockingfailure
echo Activating logical volumes
lvm vgchange -ay --ignorelockingfailure VolGroup00
I added echo statements such as "about to dm create" and a "sleep 5" after
each of those commands.
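For reference, the instrumented section of the init script looked roughly like this (a sketch only; the "about to ..." echoes and the sleeps are my debugging additions, and "dm" here is the nash builtin, not a standalone command):

```shell
# Sketch of the instrumented initramfs init (nash script).
# The echo/sleep lines are debugging additions; the rest is the original init.
echo "about to dm create"
dm create nvidia_hcddcidd 0 586114702 mirror core 2 64 nosync 2 8:16 0 8:0 0
sleep 5
echo "about to dm partadd"
dm partadd nvidia_hcddcidd
sleep 5
echo Scanning logical volumes
lvm vgscan --ignorelockingfailure
echo Activating logical volumes
lvm vgchange -ay --ignorelockingfailure VolGroup00
```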
There is zero output from mkdmnod on down until the "lvm vgscan" command runs.
It produces this output:
device-mapper: 4.5.0-ioctl (2005-10-04) initialised: firstname.lastname@example.org
Reading all physical volumes. This may take a while...
No volume groups found
Unable to find volume group "VolGroup00"
HOWEVER, when booting into the rescue environment, the dmraid set is brought
up and LVM is activated automatically and correctly.
In the rescue environment the output of "dmsetup table" is:
nvidia_hcddciddp1: 0 409368267 linear 253:0 241038
nvidia_hcddcidd: 0 586114702 mirror core 2 64 nosync 2 8:16 0 8:0 0
VolGroup00-LogVol01: 0 4063232 linear 253:3 83952000
VolGroup00-LogVol00: 0 83951616 linear 253:3 384
nvidia_hcddciddp3: 0 176490090 linear 253:0 409609305
nvidia_hcddciddp2: 0 208782 linear 253:0 63
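As a sanity check on that table (a hypothetical snippet of mine, not something from the boot logs): each nvidia_hcddciddpN line maps "start length linear device offset" in 512-byte sectors, so offset + length for every partition must stay within the 586114702-sector mirror. A quick awk pass over the table confirms this:

```shell
#!/bin/sh
# Hypothetical check: verify each linear partition mapping in the
# "dmsetup table" output fits inside the 586114702-sector mirror device.
cat <<'EOF' | awk '
/^nvidia_hcddciddp/ {
    # fields: name: start length linear major:minor offset (512-byte sectors)
    end = $6 + $3
    printf "%s ends at sector %d (%s)\n", $1, end,
           (end <= 586114702) ? "ok" : "OVERFLOW"
}'
nvidia_hcddciddp1: 0 409368267 linear 253:0 241038
nvidia_hcddcidd: 0 586114702 mirror core 2 64 nosync 2 8:16 0 8:0 0
VolGroup00-LogVol01: 0 4063232 linear 253:3 83952000
VolGroup00-LogVol00: 0 83951616 linear 253:3 384
nvidia_hcddciddp3: 0 176490090 linear 253:0 409609305
nvidia_hcddciddp2: 0 208782 linear 253:0 63
EOF
```

All three partitions check out, so the table itself looks consistent; the problem is only that nothing creates these mappings during a normal boot.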
In another attempt to get more info, I commented out the "rmparts" command in
the init script and tried a boot.
That boot did produce the expected "duplicate PV found, selecting foo"
messages. I rebooted before any writes could happen (I think).
Can you rebuild the initrd with mkinitrd-5.0.27-1 and see if that fixes it?
Hopefully I can check this on Saturday, Feb 25th.
I did a fresh install of FC5 test3. The first boot re-confirmed the initial problem.
I booted the rescue environment with:
"linux rescue selinux=0"
I ran the following commands:
yum install mkinitrd (it installed mkinitrd-5.0.28-1)
mkinitrd /boot/initrd-2.6.15-1.1955_FC5.img 2.6.15-1.1955_FC5
It worked! The initramfs correctly activated the dmraid and LVM detected the PV
and volume group.
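For anyone hitting the same problem, the recovery amounts to roughly the following (a sketch under assumptions: I'm assuming rescue mode mounted the installed system at /mnt/sysimage and that you chroot into it first, and the backup copy is my own addition, not a step from the report):

```shell
# From "linux rescue selinux=0", with the installed system mounted:
chroot /mnt/sysimage      # assumption: rescue mode mounted the system here

# Pull in the fixed mkinitrd (5.0.28-1 at the time of writing)
yum install mkinitrd

# My addition: keep the old image around in case the new one fails to boot
cp /boot/initrd-2.6.15-1.1955_FC5.img /boot/initrd-2.6.15-1.1955_FC5.img.bak

# Rebuild the initramfs for the installed kernel
# (-f overwrites the existing image)
mkinitrd -f /boot/initrd-2.6.15-1.1955_FC5.img 2.6.15-1.1955_FC5
```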