Bug 182446 - dmraid not activated during boot
Status: CLOSED RAWHIDE
Product: Fedora
Classification: Fedora
Component: mkinitrd
Version: 5
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Assigned To: Peter Jones
QA Contact: David Lawrence
Reported: 2006-02-22 11:53 EST by Dax Kelson
Modified: 2007-11-30 17:11 EST
Last Closed: 2006-02-27 00:03:45 EST

Description Dax Kelson 2006-02-22 11:53:48 EST
Description of problem:
I've been testing the new "install to dmraid" feature. Very cool BTW.

I had been testing this back in January, and this week I installed FC5 test 3.

The install finally went OK after several attempts where it would just hang
during package installation -- maybe another bug needs to be filed?

After the install, during boot it doesn't appear that the "dm" commands in the
initramfs's init script are doing anything.

The init has these commands:

mkdmnod
mkblkdevs
rmparts sda
rmparts sdb
dm create nvidia_hcddcidd 0 586114702 mirror core 2 64 nosync 2 8:16 0 8:0 0
dm partadd nvidia_hcddcidd
echo Scanning logical volumes
lvm vgscan --ignorelockingfailure
echo Activating logical volumes
lvm vgchange -ay --ignorelockingfailure  VolGroup00
resume /dev/VolGroup00/LogVol01
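
For reference, that "dm create" line is a standard device-mapper mirror table
(start length mirror log_type #log_args log_args... #mirrors dev1 offset1 dev2
offset2). Outside of nash it should correspond roughly to this (hypothetical)
dmsetup call:

# sketch only: the same mirror table fed to dmsetup instead of nash's "dm create"
echo "0 586114702 mirror core 2 64 nosync 2 8:16 0 8:0 0" | dmsetup create nvidia_hcddcidd
# "core 2 64 nosync" = in-memory dirty log, region size 64, skip the initial resync
# "2 8:16 0 8:0 0"   = two mirror legs: sdb (8:16) at offset 0 and sda (8:0) at offset 0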

I added echos such as "about to dm create" and then some "sleep 5" after
each of those commands.
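
Concretely, the additions looked roughly like this (illustrative; the init is a
nash script, so only its built-in commands are available):

echo "about to dm create"
dm create nvidia_hcddcidd 0 586114702 mirror core 2 64 nosync 2 8:16 0 8:0 0
sleep 5
echo "about to dm partadd"
dm partadd nvidia_hcddcidd
sleep 5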

There is zero output from mkdmnod on down until the "lvm vgscan" runs.

It produces this output:

device-mapper: 4.5.0-ioctl (2005-10-04) initialised: dm-devel@redhat.com
  Reading all physical volumes. This may take a while...
  No volume groups found
  Unable to find volume group "VolGroup00"
...

HOWEVER, when booting into the rescue environment, the dmraid set is brought up
and LVM is activated automatically and correctly.

In the rescue environment the output of "dmsetup table" is:

nvidia_hcddciddp1: 0 409368267 linear 253:0 241038
nvidia_hcddcidd: 0 586114702 mirror core 2 64 nosync 2 8:16 0 8:0 0
VolGroup00-LogVol01: 0 4063232 linear 253:3 83952000
VolGroup00-LogVol00: 0 83951616 linear 253:3 384
nvidia_hcddciddp3: 0 176490090 linear 253:0 409609305
nvidia_hcddciddp2: 0 208782 linear 253:0 63
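
The 253:N references in those tables are device-mapper major:minor numbers; in
the rescue environment something like the following shows which minor belongs
to which name, i.e. which pN partition the VolGroup00 volumes actually sit on:

dmsetup ls
ls -l /dev/mapper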

In another attempt to get more info, I commented out the "rmparts" commands in
the init and tried a boot.

When I booted that, I did get the expected "duplicate PV found selecting
foo" messages. I rebooted before any writes could happen (I think).
Comment 1 Peter Jones 2006-02-23 18:25:03 EST
Can you rebuild the initrd with mkinitrd-5.0.27-1, and see if that fixes it?
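
(Assuming standard mkinitrd usage, that rebuild would be something along the lines of:

mkinitrd -f /boot/initrd-2.6.15-1.1955_FC5.img 2.6.15-1.1955_FC5

with -f to overwrite the existing image, after updating to mkinitrd-5.0.27-1.)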
Comment 2 Dax Kelson 2006-02-24 12:43:20 EST
Hopefully I can check this on Saturday, Feb 25th.
Comment 3 Dax Kelson 2006-02-27 00:03:45 EST
I did a fresh install of FC5 test 3. The first boot re-confirmed the initial problem.

I booted the rescue environment with:

"linux rescue selinux=0"

I ran the following commands:

chroot /mnt/sysimage
yum install mkinitrd  (it installed mkinitrd-5.0.28-1)
mkinitrd /boot/initrd-2.6.15-1.1955_FC5.img 2.6.15-1.1955_FC5
exit
exit

It worked! The initramfs correctly activated the dmraid and LVM detected the PV
and volume group.
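
For completeness, one way to confirm after rebooting that both the mirror and
the volume group came up is:

dmsetup table
lvm vgdisplay VolGroup00

which should list the nvidia_hcddcidd mirror plus its partitions, and an
active VolGroup00.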
