Description of problem:
mkinitrd not processing all PVs in root filesystem LV.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Create 2 md RAID devices
2. Create a PV on each md device
3. Create a VG consisting of the 2 md devices
4. Create a LV on the VG
5. Copy a Fedora 11 root filesystem onto the new LV
6. Fix up /etc/fstab, etc., on the LV
7. Boot Fedora 11 LiveCD and mount the new LV at /mnt/sysimage
8. chroot /mnt/sysimage
9. Mount /proc, /sys, /boot, etc. to create a "complete" chroot environment
10. Run mkinitrd to create an initrd for the new Fedora 11 installation.
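For reference, steps 1-4 above can be sketched as shell commands. The device names (sda/sdb partitions), RAID level, and VG/LV names are assumptions on my part; because these commands destroy data, DRYRUN defaults to echoing them instead of executing them.

```shell
# Sketch of steps 1-4; device and volume names are assumptions.
# DRYRUN defaults to "echo" so the commands are printed, not executed;
# set DRYRUN= (empty) to really run them, at your own risk.
DRYRUN=${DRYRUN-echo}
$DRYRUN mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
$DRYRUN mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
$DRYRUN pvcreate /dev/md0 /dev/md1
$DRYRUN vgcreate test_vg /dev/md0 /dev/md1
$DRYRUN lvcreate -l 100%FREE -n root_lv test_vg
```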
The initrd created by mkinitrd will not work, because the generated init
script will only include an "mdadm -A ..." command for one of the md
devices. Digging into mkinitrd, it appears that the logic which
discovered and processed all of the PVs in a VG (handlelvordev in
Fedora 10) is gone, and it doesn't look like there's anything there to
replace it. Beyond the multiple-md-devices case, this could also bite
someone who needed different SCSI drivers for different PVs.
mkinitrd should process all of the PVs that make up any required VG.
I had to use the above procedure because anaconda is bombing out on my
Yes, we got rid of the horrible handlelvordev hack; instead we traverse the
slaves as listed in sysfs, so we should be able to find both md raid sets
just fine. Can you please do the following:
And paste the output here. Note: change the 0 if needed to match the minor
number of an LV on the troublesome VG.
mkinitrd -v -f test.img $(uname -r) &> log
And attach the resulting log file.
bash -x /sbin/mkinitrd -f test.img $(uname -r) &> log
And attach the resulting log file.
(In reply to comment #1)
> Yes, we got rid of the horrible handlelvordev hack; instead we traverse the
> slaves as listed in sysfs, so we should be able to find both md raid sets
> just fine. Can you please do the following:
This assumes that sysfs will actually list all of the physical volumes that
make up a logical volume's volume group. This is not the case.
For a linear LV, sysfs will *only* list PVs that actually hold data from
that LV. I verified this by booting a LiveCD in a VM with two drives and
doing the following:
* create two partitions on each drive -- sda1, sda2, sdb1, and sdb2
* create two RAID-1 devices -- md0 (sda1 and sdb1) and md1 (sda2 and sdb2)
* pvcreate /dev/md0 /dev/md1
* vgcreate test_vg /dev/md0 /dev/md1
* vgdisplay -- note the number of extents in the VG
* create a LV that occupies the entire VG
* note that sysfs lists both PVs as slaves of the LV
* lvremove the LV
* create a LV that is smaller than the smallest md device
* note that sysfs only lists one PV as a slave of the LV
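The asymmetry described above can be demonstrated without real hardware by mocking the sysfs layout. The directory names below mirror what the experiment observed and are purely illustrative; no real /sys is touched.

```shell
# Mock of the sysfs state seen in the experiment above (illustrative):
# for the small linear LV, /sys/block/dm-0/slaves names only the one PV
# that actually holds the LV's extents.
fake_sysfs=$(mktemp -d)
mkdir -p "$fake_sysfs/block/dm-0/slaves/md0"   # md1 is absent on purpose
ls "$fake_sysfs/block/dm-0/slaves"             # prints only: md0
```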
mkinitrd cannot rely on sysfs to do this.
I've talked to Peter Jones about this and we believe this to be an lvm bug:
sysfs should give us a way to find all disks on which a device depends; that
is what the slaves dir is for!
Re-assigning to lvm-team.
If an LV (say its kernel name is dm-0) uses two PVs, there will properly be two slaves set up on the underlying devices in sysfs.
If the LV is only on one device, there will be only one device as slave.
The kernel device mapper has no idea what a VG is - that is a higher-level abstraction.
You should use the lvm tools to properly check which devices must be activated to have all PVs prepared for your volume group.
(What's wrong with "vgs --noheadings -o pv_name <VGNAME>" to discover which devices are PVs in the VG and need to be activated in the initrd, for example?)
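As a hedged sketch, the vgs invocation suggested above could be wrapped like this; the helper name and the leading-whitespace trim are mine, not part of any existing tool.

```shell
# Sketch: print one PV path per line for a VG, using the vgs invocation
# suggested above. The function name and the whitespace trim are
# assumptions, not existing mkinitrd or lvm code.
list_vg_pvs() {
    vgs --noheadings -o pv_name "$1" | sed 's/^[[:space:]]*//'
}
```

An initrd generator could then iterate over the output of `list_vg_pvs <VGNAME>` to activate every device the VG needs.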
(In reply to comment #4)
> If an LV (say its kernel name is dm-0) uses two PVs, there will properly be
> two slaves set up on the underlying devices in sysfs.
> If the LV is only on one device, there will be only one device as slave.
> The kernel device mapper has no idea what a VG is - that is a higher-level
> abstraction.
> You should use the lvm tools to properly check which devices must be
> activated to have all PVs prepared for your volume group.
> (What's wrong with "vgs --noheadings -o pv_name <VGNAME>" to discover which
> devices are PVs in the VG and need to be activated in the initrd, for
> example?)
What's wrong with it is that it is not generic. Using the sysfs slaves to find out which drivers we need to load for the underlying devices works fine for dmraid / mdraid / whatever, except for lvm.
I agree this is not a kernel bug; it's an lvm userspace tools bug. They
should set things up so that there is first one device map for the entire volume group, and the LVs are then device maps of that device map, just like how it's done with dmraid sets and partitions on dmraid sets. That way the sysfs tree will properly represent the dependency tree as it actually is.
The fact that the whole VG concept only exists in lvm metadata and is in no
way visible in the kernel representation is a bug.
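For what it's worth, the generic sysfs traversal being discussed can be sketched as a recursive walk of the slaves/ directories. The function name is hypothetical, and SYSFS is made overridable purely so the sketch can be exercised against a fake tree.

```shell
# Hypothetical sketch of a generic dependency walk over sysfs slaves/.
# SYSFS defaults to /sys; override it to test against a fake tree.
SYSFS=${SYSFS-/sys}
walk_slaves() {
    # print the device itself, then recurse into each of its slaves
    echo "$1"
    for s in "$SYSFS/block/$1/slaves/"*; do
        [ -e "$s" ] && walk_slaves "$(basename "$s")"
    done
}
```

This works for dm-on-md-on-sd stacks precisely because each kernel layer registers its slaves; VG membership, being userspace-only metadata, never appears in this walk.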
There's a fundamental misunderstanding of device-mapper here.
Volume groups are now entirely a userspace concept - part of LVM. Device-mapper is much more generic than LVM and works at a lower level.
The dependencies you are seeking are only dependencies within userspace and there is no prospect whatsoever of putting them into the kernel.
I would welcome a patch to mkinitrd to fix this. The best way to fix this is to
modify the handledm() function, and specifically the "if [ -n "$vg" ]; then"
block, to ask lvm to list all the PVs and then call
findstoragedriver /dev/devicenode
for each PV.
(In reply to comment #8)
> I would welcome a patch to mkinitrd to fix this. The best way to fix this is
> to modify the handledm() function, and specifically the "if [ -n "$vg" ];
> then" block, to ask lvm to list all the PVs and then call
> findstoragedriver /dev/devicenode
> for each PV.
Excellent! I will try to get this done ASAP.
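A minimal sketch of the fix described in comment #8 might look like the following; handle_vg_pvs is a hypothetical stand-in for the relevant part of handledm(), and findstoragedriver is assumed to be mkinitrd's existing helper, so this is an outline rather than the actual patch.

```shell
# Hypothetical sketch of the handledm() change from comment #8: once the
# VG name is known, ask lvm for every PV in the VG and run mkinitrd's
# findstoragedriver helper on each one.
handle_vg_pvs() {
    lvm vgs --noheadings -o pv_name "$1" |
    while read -r pv; do
        findstoragedriver "$pv"
    done
}
```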
Created attachment 349820 [details]
Proposed patch to mkinitrd
Thanks for the patch, this is fixed in mkinitrd-6.0.90.
How about an F11 update that includes this fix? mkinitrd-6.0.87-1.fc11.x86_64
just came out and overwrote my patched version. Could have been nasty if I
hadn't caught it.
(In reply to comment #12)
> How about an F11 update that includes this fix? mkinitrd-6.0.87-1.fc11.x86_64
> just came out and overwrote my patched version. Could have been nasty if I
> hadn't caught it.
6.0.87 hit updates-testing long before your patch became available; if there is going to be another F-11 update, I'll add your patch to it.
*** Bug 511338 has been marked as a duplicate of this bug. ***
*** Bug 512555 has been marked as a duplicate of this bug. ***
mkinitrd-6.0.87-3.fc11 has been submitted as an update for Fedora 11.
mkinitrd-6.0.87-3.fc11 works on my system.
mkinitrd-6.0.87-3.fc11 has been pushed to the Fedora 11 stable repository. If problems still persist, please make note of it in this bug report.