Description of problem:
After an RHN upgrade from kernel-smp-2.6.9-5.EL to kernel-smp-2.6.9-42.0.2.EL, a system with its root partition under LVM2 does not boot.

Version-Release number of selected component (if applicable):
Red Hat Enterprise Linux AS (2.6.9-42.0.2.ELsmp)
lvm2-2.02.06-6.0.RHEL4
kernel-smp-2.6.9-42.0.2.EL

How reproducible: always

Steps to Reproduce:
1. The system was initially partitioned/formatted/installed with the root partition under LVM2, using lvm2-2.00.31-1.0.RHEL4 and kernel-smp-2.6.9-5.EL.
2. Via RHN, it was upgraded (without any problems) to lvm2-2.02.06-6.0.RHEL4 and kernel-smp-2.6.9-42.0.2.EL.
3. Rebooting into the new kernel results in a kernel panic.

Actual results:
The system boots normally with the old kernel; with the new kernel, boot aborts with a kernel panic:

Booting 'Red Hat Enterprise Linux AS (2.6.9-42.0.0.ELsmp)'
root (hd0,0)
 Filesystem type is ext2fs, partition type 0x83
kernel /vmlinuz-2.6.9-42.0.0.ELsmp ro root=/dev/VolGroup00/LogVol00
   [Linux-bzImage, setup=0x1400, size=0x15f4d2]
initrd /initrd-2.6.9.-42.0.2.ELsmp.img
   [Linux-initrd @ 0x32edc000, 0x113561 bytes]
Uncompressing Linux... Ok, booting the kernel.
ide0: I/O resource 0x1F0-0x1F7 not free.
ide0: ports already in use, skipping probe
Red Hat nash version 4.2.1.8 starting
Reading all physical volumes. This may take a while...
No volume groups found
Volume group "VolGroup00" not found
ERROR: /bin/lvm exited abnormally! (pid 341)
mount: error 6 mounting ext3
mount: error 2 mounting none
switchroot: mount failed: 22
umount /initrd/dev failed: 2
Kernel panic - not syncing: Attempting to kill init!

Expected results:
The system boots normally with the new kernel.
*** Bug 204734 has been marked as a duplicate of this bug. ***
You need to work out why the kernel is no longer seeing the device that is supposed to contain your root filesystem. Do you see the ide errors with your old kernel? Any different boot or mkinitrd options?
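One way to chase the mkinitrd question is to unpack the initrd and read the nash init script it will run at boot. A RHEL 4 initrd of this era is a gzipped cpio (newc) archive; the sketch below builds and unpacks a tiny dummy image to show the mechanics (the local file names are stand-ins, not the real /boot/initrd-2.6.9-42.0.2.ELsmp.img):

```shell
# Sketch: a RHEL 4 initrd is a gzipped newc cpio archive whose /init script
# nash interprets at boot. Build a dummy image the same way, then unpack it.
mkdir -p build && printf 'echo "lvm vgscan"\n' > build/init
( cd build && echo init | cpio -o -H newc ) | gzip > initrd-demo.img

# Unpack a copy to inspect its contents (on a real system, point zcat at
# the image under /boot instead of this demo file):
mkdir -p inspect
( cd inspect && zcat ../initrd-demo.img | cpio -idm )
cat inspect/init    # the script nash would run at boot
```

Inspecting the real image this way shows which modules are loaded and which lvm commands run before the root filesystem is mounted.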
1. The ide errors look the same with the 2.6.9-5 kernel:

...
ide0: I/O resource 0x1F0-0x1F7 not free.
ide0: ports already in use, skipping probe
Red Hat nash version 4.2.1.8 starting
Reading all physical volumes. This may take a while...
Found volume group "VolGroup00" using metadata type lvm2
2 logical volume(s) in volume group "VolGroup00" now active
...

2. Boot options:

title Red Hat Enterprise Linux AS (2.6.9-42.0.2.ELsmp)
	root (hd0,0)
	kernel /vmlinuz-2.6.9-42.0.2.ELsmp ro root=/dev/VolGroup00/LogVol00 rhgb quiet
	initrd /initrd-2.6.9-42.0.2.ELsmp.img
title Red Hat Enterprise Linux AS (2.6.9-5.ELsmp)
	root (hd0,0)
	kernel /vmlinuz-2.6.9-5.ELsmp ro root=/dev/VolGroup00/LogVol00 rhgb quiet
	initrd /initrd-2.6.9-5.ELsmp.img

3. Under the 2.6.9-5 kernel, the output of 'lvm pvscan':

PV /dev/sda2   VG VolGroup00   lvm2 [111.69 GB / 128.00 MB free]
Total: 1 [111.69 GB] / in use: 1 [111.69 GB] / in no VG: 0 [0 ]

Under 2.6.9-42.0.2 (after putting echo "lvm pvscan" >> $RCFILE into mkinitrd):

No matching physical volumes found.

I guess the lvm2 label is not detected with the 2.6.9-42.0.2 kernel. Attaching lvm2.log for 2.6.9-5; I have no idea where to put a log file while booting under 2.6.9-42.0.2...
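The debug hack mentioned in point 3 can be sketched as follows. The assumption (taken from the comment above) is that inside RHEL 4's /sbin/mkinitrd, $RCFILE holds the path of the init script being assembled for the initrd image; the demo file below is a local stand-in, not the real variable:

```shell
# Sketch of the debug hack: append extra lvm commands to the init script
# that mkinitrd assembles, so their output appears on the console at boot.
# ./init-demo stands in for the real $RCFILE inside /sbin/mkinitrd.
RCFILE=./init-demo
: > "$RCFILE"                       # start from an empty demo script
echo "lvm pvscan" >> "$RCFILE"      # list physical volumes found at boot
echo "lvm vgscan -vv" >> "$RCFILE"  # verbose volume-group scan
cat "$RCFILE"
```

After patching the real mkinitrd this way, the initrd has to be regenerated for the new kernel so the extra commands actually run at boot.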
Created attachment 137127 [details] lvm2.log for 2.6.9-5 kernel
Actually, it seems /dev/sda2, where the lvm2 label was detected by the old kernel, is not scanned at all with the new kernel. If I add the -vv option to 'lvm vgscan' in mkinitrd, and filter = [ "r|/dev/ram|", "a/.*/" ] to lvm.conf, then the output under the 2.6.9-5 (old) kernel is:

...
File-based locking enabled.
Wiping cache of LVM-capable devices
Wiping internal VG cache
Reading all physical volumes. This may take a while...
Finding all volume groups
...
/dev/sda1: No label detected
/dev/sda2: lvm2 label detected
...
Locking /var/lock/lvm/V_VolGroup00 RB
Finding volume group "VolGroup00"
Found volume group "VolGroup00" using metadata type lvm2
...

but under the 2.6.9-42.0.2 kernel:

Red Hat nash version 4.2.1.8 starting
Setting global/locking_type to 1
Setting global/locking_dir to /var/lock/lvm
Creating directory "var/lock/lvm"
File-based locking enabled.
Wiping cache of LVM-capable devices
Wiping internal VG cache
Reading all physical volumes. This may take a while...
Finding all volume groups
No volume groups found
Volume group "VolGroup00" not found
ERROR: /bin/lvm exited abnormally! (pid 341)
...
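For reference, the filter quoted above lives in the devices section of /etc/lvm/lvm.conf; a minimal excerpt of how it would look there (the comment is mine):

```
devices {
    # reject ramdisk devices, accept everything else
    filter = [ "r|/dev/ram|", "a/.*/" ]
}
```

The filter only controls which device nodes lvm scans for labels; since /dev/sda2 is accepted by "a/.*/" yet never reported at all under the new kernel, the device seems to be missing from the initrd's /dev rather than being filtered out.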
QE ack for 4.5.
Hello, I just had a similar issue. I was upgrading from RHEL4 Update 3 to RHEL4 Update 4, but I was using no LVM on the machine. The upgrade went just fine, but when I rebooted the machine I also got the same kernel panic:

mount: error 6 mounting ext3
mount: error 2 mounting none
switchroot: mount failed: 22
umount /initrd/dev failed: 2
Kernel panic - not syncing: Attempting to kill init!

This although my root fs was mounted on /dev/sda3:

/dev/sda3 on / type ext3 (rw)
none on /proc type proc (rw)
none on /sys type sysfs (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
usbfs on /proc/bus/usb type usbfs (rw)
/dev/sda1 on /boot type ext3 (rw)
none on /dev/shm type tmpfs (rw)
/dev/sda11 on /home type ext3 (rw)
/dev/sda10 on /opt type ext3 (rw)
/dev/sda8 on /opt/msi type ext3 (rw)
/dev/sda9 on /tmp type ext3 (rw)
/dev/sda7 on /usr type ext3 (rw)
/dev/sda12 on /var type ext3 (rw)

There was another error message, something about LABEL=/ not being found or usable. Is this the same issue? Do you need more input? Is there a chance to get this fixed, or do we need to wait till 4.5?

Kind regards,
Cornelius
Similar problem for us: we were running two stock IBM e300 backup servers, and when we rebooted into the latest update, Red Hat Enterprise Linux ES (2.6.9-42.0.3.EL), we received a kernel panic. We are also using LVM, but the problem seems unrelated; it is more likely a hardware incompatibility. We had to revert to Red Hat Enterprise Linux ES (2.6.9-34.0.2.EL) and exclude kernel packages from updates on these machines. We are nervous about rebooting our production boxes with this new update now.
Development Management has reviewed and declined this request. You may appeal this decision by reopening this request.
More information is needed to determine where the problem is. If you are a Red Hat Enterprise Linux customer with active support entitlements, please log in to Red Hat Support at https://www.redhat.com/apps/support/ for assistance.
This issue should not be closed. It is still very much a real problem and Igor's problem description is spot on. It seems that the Fedora team has tackled the same issue with lvm. http://forums.fedoraforum.org/showthread.php?t=59320