Red Hat Bugzilla – Bug 164423
mkinitrd hard codes init script to only activate root volume group
Last modified: 2009-03-11 12:44:48 EDT
Description of problem:
When I run mkinitrd, it detects the volume group that the root filesystem is on and hard codes the "init" script in the initial ramdisk to activate only that volume group.
This causes a failure when /etc/rc.d/rc.sysinit attempts to mount filesystems found on other volume groups, resulting in a root shell rather than a successful boot.
It would be preferable if mkinitrd omitted the volume group name in the call to vgchange and simply allowed vgchange to default to activating all known volume groups. Here's a suggested fix:
--- /sbin/mkinitrd 2005-03-17 14:29:24.000000000 -0600
+++ /tmp/mkinitrd.new 2005-07-27 13:19:02.000000000 -0500
@@ -728,7 +728,7 @@
echo "echo Scanning logical volumes" >> $RCFILE
echo "lvm vgscan --ignorelockingfailure" >> $RCFILE
echo "echo Activating logical volumes" >> $RCFILE
- echo "lvm vgchange -ay --ignorelockingfailure $root_vg" >> $RCFILE
+ echo "lvm vgchange -ay --ignorelockingfailure" >> $RCFILE
Steps to Reproduce:
1. Create two volume groups: vg00 and vg01
2. Place the root file system on a logical volume within vg00
3. Create a logical volume and file system within vg01.
4. Configure /etc/fstab to mount the file system in vg01. For example:
/dev/vg01/lvfoo /opt/foo ext3 defaults 1 2
5. Run /sbin/mkinitrd to re-create the initial ramdisk.
6. Reboot the system. Notice that boot fails because /dev/vg01/lvfoo has not yet been activated.
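The reproduction steps above can be sketched roughly as follows. The device names /dev/sdb and /dev/sdc and the LV sizes are hypothetical, and this requires root plus scratch disks, so treat it as a sketch rather than something to paste in:

```shell
# Sketch of the reproduction, assuming two spare disks /dev/sdb and /dev/sdc.
pvcreate /dev/sdb /dev/sdc
vgcreate vg00 /dev/sdb          # root VG
vgcreate vg01 /dev/sdc          # second VG, not needed by the initrd itself
lvcreate -L 8G -n lvroot vg00   # root filesystem lives here (install step omitted)
lvcreate -L 1G -n lvfoo vg01
mkfs.ext3 /dev/vg01/lvfoo
mkdir -p /opt/foo
echo '/dev/vg01/lvfoo /opt/foo ext3 defaults 1 2' >> /etc/fstab
# Re-create the initrd for the running kernel, then reboot.
mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)
```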
Actual Results: I ended up having to enter the root password to obtain a shell with "(Repair filesystem)" in the prompt.
Expected Results: The system should have booted, mounting /dev/vg01/lvfoo on /opt/foo.
Alternate suggested solution: I would be perfectly happy if rc.sysinit would
activate the remaining volume groups before trying to mount filesystems from them.
I guess the real problem is that neither initrd nor rc.sysinit activates the
non-root volume groups before trying to mount file systems on them.
This bug just made my system unbootable (kernel panic, can't find init).
I have two volume groups. The volume group "sys" contains partitions with the
operating system and swap. The volume group "backup" is used by one application
for some temporary storage (absolutely not needed for booting).
The initrd script created with old mkinitrd-4.1.18-2 activates both volume
groups. The console output suggests that the activation order was backup and then
sys. Looking into the /dev/mapper directory seems to confirm this (order of
assignment of minor numbers):
# ls -l /dev/mapper/
brw------- 1 root root 253, 0 Feb 8 09:00 backup-data
crw------- 1 root root 10, 63 Feb 8 09:00 control
brw------- 1 root root 253, 1 Feb 8 09:00 sys-root
brw------- 1 root root 253, 2 Feb 8 09:00 sys-srv
After the update to U2, the new initrd image created by mkinitrd-184.108.40.206-1 activates
only the sys volume group, and the subsequent mount of the root file system fails.
Actually, from the output, it seems like it has mounted *something*, but that
*something* doesn't seem to be my root file system. I can see "EXT3-fs: mounted
filesystem with ordered data mode." message, and then kernel panics with "can't
find init" (or something like that).
My guess is that it mounted whatever got minor number "1" (probably my
/dev/sys/srv logical volume that contains /srv file system, since if backup
volume group is not present, it would get minor number 1). However, I haven't
had time to recreate initrd image by hand to be able to confirm this.
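One way to confirm that guess without a full rebuild would be to unpack the initrd and inspect the generated init script, and to compare device-mapper minor numbers on the running system. The unpack commands assume a gzipped-cpio initrd image, as mkinitrd produced in this era; the path is illustrative:

```shell
# Unpack the suspect initrd image and see which VG its init script activates.
mkdir /tmp/initrd-inspect && cd /tmp/initrd-inspect
zcat /boot/initrd-$(uname -r).img | cpio -idmv
grep vgchange init          # shows whether only "sys" is activated

# Device-mapper minor numbers on the running system, for comparison:
dmsetup ls                  # prints each mapped device with its (major, minor)
```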
Downgrading to mkinitrd-4.1.18-2 and recreating initrd images solved the
problem, and my system is bootable again.
BTW, I also had a possibly related problem on one of my other boxes (also sys and
backup volume groups, but activated in the more logical order, first sys then
backup). Initrd images created by mkinitrd-220.127.116.11-1 would not activate the backup
volume group; however, the devices in /dev/mapper and /dev/backup were not created
even when the backup volume group was activated after the fact!? It was kind of strange.
Possibly also a bug somewhere in udev and/or lvm2?
I've just realized that the new kernel-2.6.9-34.EL depends on the broken
mkinitrd-18.104.22.168-1. Shouldn't mkinitrd be fixed before a kernel that specifically
depends on it is released? Anyhow, I've updated the kernel using --nodeps.
Tested. Everything seems to work well for now.
rc.sysinit should be activating these; if mkinitrd activates other VGs, it'll
fail in the presence of clustered VGs.
initscripts already has:
if [ -x /sbin/lvm.static ]; then
    action $"Setting up Logical Volume Management:" /sbin/lvm.static vgchange -a y --ignorelockingfailure
If this is happening before init runs, that's a mkinitrd issue of some sort.
It's not before init runs. From the initial report:
Actual Results: I ended up having to enter the root password to obtain a shell
with "(Repair filesystem)" in the prompt.
There is another problem here.
What if volume groups are located on devices which are not available when the initrd does its vgscan?
What if, say, /etc/sysconfig/modules/*.modules loads a block device driver and a device behind it has a volume group? vgscan needs to be run again to find it, so that the vgchange -a y in rc.sysinit can enable it.
vgchange should handle being run without a prior vgscan just fine.
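A hedged sketch of how rc.sysinit could cover block drivers loaded after the initrd (this mirrors the existing initscripts fragment quoted earlier; it is a suggestion, not what initscripts actually ships):

```shell
# After /etc/sysconfig/modules/*.modules may have loaded extra block drivers,
# rescan so vgchange can see VGs on the newly available devices.
if [ -x /sbin/lvm.static ]; then
    /sbin/lvm.static vgscan --ignorelockingfailure
    /sbin/lvm.static vgchange -a y --ignorelockingfailure
fi
```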
In any case, the code exists in rc.sysinit, and has existed ever since RHEL 4 was released. Given that it works for me and many other users, I'm going to close this - if you have a reliable way to reproduce, please reopen. (Sorry about the delay.)