Bug 182446 - dmraid not activated during boot
Summary: dmraid not activated during boot
Keywords:
Status: CLOSED RAWHIDE
Alias: None
Product: Fedora
Classification: Fedora
Component: mkinitrd
Version: 5
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Peter Jones
QA Contact: David Lawrence
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2006-02-22 16:53 UTC by Dax Kelson
Modified: 2007-11-30 22:11 UTC
CC List: 0 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2006-02-27 05:03:45 UTC
Type: ---
Embargoed:



Description Dax Kelson 2006-02-22 16:53:48 UTC
Description of problem:
I've been testing the new "install to dmraid" feature. Very cool BTW.

I had been testing this back in January, and this week I installed FC5 test 3.

The install finally completed OK after several attempts where it would just hang
during package installation -- maybe another bug needs to be filed for that?

After the install, during boot, it doesn't appear that the "dm" commands in the
initramfs's init script are doing anything.

The init has these commands:

mkdmnod
mkblkdevs
rmparts sda
rmparts sdb
dm create nvidia_hcddcidd 0 586114702 mirror core 2 64 nosync 2 8:16 0 8:0 0
dm partadd nvidia_hcddcidd
echo Scanning logical volumes
lvm vgscan --ignorelockingfailure
echo Activating logical volumes
lvm vgchange -ay --ignorelockingfailure  VolGroup00
resume /dev/VolGroup00/LogVol01

I added echoes such as "about to dm create" and a "sleep 5" after each of those
commands.
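
Roughly, the instrumented stretch of the init script looked like this (reconstructed
from memory; the exact echo wording and sleep placement are illustrative, not the
literal lines):

echo about to mkdmnod
mkdmnod
sleep 5
echo about to dm create
dm create nvidia_hcddcidd 0 586114702 mirror core 2 64 nosync 2 8:16 0 8:0 0
sleep 5
echo about to dm partadd
dm partadd nvidia_hcddcidd
sleep 5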

There is zero output from mkdmnod on down until the "lvm vgscan" runs.

It produces this output:

device-mapper: 4.5.0-ioctl (2005-10-04) initialised: dm-devel
  Reading all physical volumes. This may take a while...
  No volume groups found
  Unable to find volume group "VolGroup00"
...

HOWEVER, when booting into the rescue environment, the dmraid set is brought up and
LVM is activated automatically and correctly.

In the rescue environment the output of "dmsetup table" is:

nvidia_hcddciddp1: 0 409368267 linear 253:0 241038
nvidia_hcddcidd: 0 586114702 mirror core 2 64 nosync 2 8:16 0 8:0 0
VolGroup00-LogVol01: 0 4063232 linear 253:3 83952000
VolGroup00-LogVol00: 0 83951616 linear 253:3 384
nvidia_hcddciddp3: 0 176490090 linear 253:0 409609305
nvidia_hcddciddp2: 0 208782 linear 253:0 63
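
For what it's worth, my rough decoding of the mirror line, going by the generic
device-mapper mirror table syntax (my interpretation, not something the installer
documents):

nvidia_hcddcidd: 0 586114702 mirror core 2 64 nosync 2 8:16 0 8:0 0
  0 586114702       start sector and length, in 512-byte sectors
  mirror            the device-mapper mirror target
  core 2 64 nosync  in-memory "core" dirty log, 2 log args: region size 64, nosync
  2 8:16 0 8:0 0    two mirror legs: 8:16 (sdb) at offset 0 and 8:0 (sda) at offset 0

The 253:N devices referenced by the partition and LV lines appear to be device-mapper
nodes themselves (major 253), i.e. the mirror and its partition mappings.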

In another attempt to get more information, I commented out the "rmparts" lines in
the init script and tried a boot.

When I booted that way I did get the expected "duplicate PV found selecting
foo" messages (both mirror halves carry identical LVM metadata, which is presumably
why the rmparts lines are there in the first place). I rebooted before any writes
could happen (I think).

Comment 1 Peter Jones 2006-02-23 23:25:03 UTC
Can you rebuild the initrd with mkinitrd-5.0.27-1 and see if that fixes it?
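
For reference, the rebuild amounts to something like the command below (a sketch;
-f overwrites the existing image, and the image name and kernel version should match
the installed kernel):

mkinitrd -f /boot/initrd-2.6.15-1.1955_FC5.img 2.6.15-1.1955_FC5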

Comment 2 Dax Kelson 2006-02-24 17:43:20 UTC
Hopefully I can check this on Saturday, Feb 25th.

Comment 3 Dax Kelson 2006-02-27 05:03:45 UTC
I did a fresh install of FC5 test 3. The first boot reconfirmed the initial problem.

I booted the rescue environment with:

"linux rescue selinux=0"

I ran the following commands:

chroot /mnt/sysimage
yum install mkinitrd  (it installed mkinitrd-5.0.28-1)
mkinitrd /boot/initrd-2.6.15-1.1955_FC5.img 2.6.15-1.1955_FC5
exit
exit

It worked! The initramfs correctly activated the dmraid and LVM detected the PV
and volume group.
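
In case it helps anyone hitting the same thing, one quick way to check what a rebuilt
image will do at boot is to unpack it and read the generated init script (assuming
the FC5 initrd is a gzip-compressed cpio archive, which it should be):

mkdir /tmp/initrd-check
cd /tmp/initrd-check
zcat /boot/initrd-2.6.15-1.1955_FC5.img | cpio -id
grep -A1 "dm create" init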

