Bug 442678 - [5.2][kdump] unable to mount rootfs. Dropping to shell
Summary: [5.2][kdump] unable to mount rootfs. Dropping to shell
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: kexec-tools
Version: 5.2
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Assignee: Neil Horman
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2008-04-16 08:31 UTC by Qian Cai
Modified: 2009-01-20 20:59 UTC
5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 600601
Environment:
Last Closed: 2009-01-20 20:59:21 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
lvmdump from working system (15.65 KB, application/octet-stream)
2008-04-21 10:04 UTC, Qian Cai
lvm vgscan -vvvv in kdump kernel (20.33 KB, text/plain)
2008-04-21 10:04 UTC, Qian Cai
patch to reread partition table on drives as they are detected (624 bytes, patch)
2008-04-23 15:07 UTC, Neil Horman


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2009:0105 0 normal SHIPPED_LIVE kexec-tools bug fix and enhancement update 2009-01-20 16:04:36 UTC

Description Qian Cai 2008-04-16 08:31:52 UTC
Description of problem:
I have observed two machines fail to mount the rootfs in the capture kernel:

Attempting to enter user-space to capture vmcore
Creating root device.
Checking root filesystem.
fsck 1.38 (30-Jun-2005)
fsck: WARNING: couldn't open /etc/fstab: No such file or directory
e2fsck 1.38 (30-Jun-2005)
fsck.ext2: while determining whether /dev/VolGroup00/LogVol00 is mounted.
fsck.ext2: while trying to open /dev/VolGroup00/LogVol00

The superblock could not be read or does not describe a correct ext2
filesystem.  If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>

fsck.ext2: 
Mounting root filesystem.
Trying mount -t ext3 /dev/VolGroup00/LogVol00 /sysroot
Trying mount -t ext2 /dev/VolGroup00/LogVol00 /sysroot
Trying mount -t minix /dev/VolGroup00/LogVol00 /sysroot
unable to mount rootfs. Dropping to shell
root:/> 

nec-em3.rhts.boston.redhat.com
http://rhts.redhat.com/cgi-bin/rhts/test_log.cgi?id=2701232
http://rhts.redhat.com/cgi-bin/rhts/test_log.cgi?id=2704270
http://rhts.redhat.com/cgi-bin/rhts/test_log.cgi?id=2707186

ibm-himalaya.rhts.boston.redhat.com
http://rhts.redhat.com/cgi-bin/rhts/test_log.cgi?id=2681873

Version-Release number of selected component (if applicable):
RHEL5.2-Server-20080409.0
kernel-2.6.18-89.el5
kexec-tools-1.102pre-21.el5

How reproducible:
Always

Steps to Reproduce:
1. Configure kdump with crashkernel=128M@16M
2. Trigger a crash with SysRq-C

Additional information:
system info of nec-em3.rhts.boston.redhat.com
http://rhts.redhat.com/cgi-bin/rhts/test_log.cgi?id=2698045

system info of ibm-himalaya.rhts.boston.redhat.com
http://rhts.redhat.com/cgi-bin/rhts/test_log.cgi?id=2680048

Comment 1 Qian Cai 2008-04-16 09:13:48 UTC
For nec-em3.rhts.boston.redhat.com, it has the same problem on RHEL5U1 as well.
It looks like the major problem is that the second kernel cannot recognize the
rootfs's partition table:

SCSI device sda: drive cache: write back
 sda: unknown partition table

The following is the output of lsmod in both the first and the second kernels.
Is it because we have not included enough modules in the kdump initrd?

=== the first Kernel ===
[root@nec-em3 ~]# lsmod
Module                  Size  Used by
autofs4                24517  2 
hidp                   23105  2 
rfcomm                 42457  0 
l2cap                  29505  10 hidp,rfcomm
bluetooth              53797  5 hidp,rfcomm,l2cap
sunrpc                144893  1 
ipv6                  258273  24 
xfrm_nalgo             13765  1 ipv6
crypto_api             11969  1 xfrm_nalgo
dm_multipath           22089  0 
video                  21193  0 
sbs                    18533  0 
backlight              10049  1 video
i2c_ec                  9025  1 sbs
button                 10705  0 
battery                13637  0 
asus_acpi              19289  0 
ac                      9157  0 
lp                     15849  0 
sg                     36189  0 
floppy                 57125  0 
ide_cd                 40033  0 
e1000                 114641  0 
cdrom                  36705  1 ide_cd
serio_raw              10692  0 
pcspkr                  7105  0 
i2c_i801               11597  0 
i2c_core               23745  2 i2c_ec,i2c_i801
parport_pc             29157  1 
parport                37513  2 lp,parport_pc
dm_snapshot            21477  0 
dm_zero                 6209  0 
dm_mirror              29125  0 
dm_mod                 61405  9 dm_multipath,dm_snapshot,dm_zero,dm_mirror
ata_piix               22341  0 
ahci                   30149  3 
libata                143997  2 ata_piix,ahci
sd_mod                 24897  5 
scsi_mod              134605  3 sg,libata,sd_mod
ext3                  123593  2 
jbd                    56553  1 ext3
uhci_hcd               25421  0 
ohci_hcd               23261  0 
ehci_hcd               33357  0 

=== the second kernel ===
root:/> lsmod
Module                  Size  Used by    Not tainted
dm_snapshot            21477  0 
dm_zero                 6209  0 
dm_mirror              29125  0 
dm_mod                 61405  3 dm_snapshot,dm_zero,dm_mirror
ext3                  123593  0 
jbd                    56553  1 ext3
ahci                   30149  0 
ata_piix               22341  0 
libata                143997  2 ahci,ata_piix
sd_mod                 24897  0 
scsi_mod              134605  2 libata,sd_mod
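The module-coverage question above can be checked mechanically. A minimal sketch, assuming the two lsmod outputs have been saved to files; the `missing_modules` helper name is mine, not anything from the bug:

```shell
# List modules loaded in the first kernel but absent from the kdump
# kernel. $1 = saved lsmod output of the first kernel, $2 = kdump's.
missing_modules() {
  tmp=$(mktemp)
  # Module names only (skip the header line), sorted for comparison.
  awk 'NR > 1 { print $1 }' "$2" | sort > "$tmp"
  awk 'NR > 1 { print $1 }' "$1" | sort | grep -vxF -f "$tmp"
  rm -f "$tmp"
}
```

Running it on the two listings above would show only modules the kdump initrd deliberately omits (bluetooth, sunrpc, ...); the storage stack (ahci, ata_piix, libata, sd_mod, scsi_mod, dm_*) is present in both, which points away from a missing-module explanation.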



Comment 2 Neil Horman 2008-04-16 12:02:43 UTC
When the system reboots after a hard reset, does it properly find the root
partition?

Comment 3 Qian Cai 2008-04-16 12:25:55 UTC
Yes. After I clicked the reboot button in the RHTS web UI, the system booted correctly.

Comment 4 Neil Horman 2008-04-16 12:45:53 UTC
Ok, when it drops you to a shell, can you run the mount command and check
whether the root volume is actually mounted? I'd like to make sure that this
isn't an exit-code error in busybox (I've run into a few of those lately). If
it's not mounted, attempt to mount it manually and see if it mounts (I assume
it's an ext3 fs). If you can't mount it manually, I'll reserve one of the
systems and see if I can't poke about with it some. Thanks, Cai!

Comment 5 Qian Cai 2008-04-18 13:39:14 UTC
Mounting root filesystem.
Trying mount -t ext3 /dev/VolGroup00/LogVol00 /sysroot
Trying mount -t ext2 /dev/VolGroup00/LogVol00 /sysroot
Trying mount -t minix /dev/VolGroup00/LogVol00 /sysroot
unable to mount rootfs. Dropping to shell
root:/> mount
/proc on /proc type proc (rw)
/sys on /sys type sysfs (rw)
/dev on /dev type tmpfs (rw)
/dev/pts on /dev/pts type devpts (rw)
root:/> mount -t ext3 /dev/VolGroup00/LogVol00 /sysroot
mount: Mounting /dev/VolGroup00/LogVol00 on /sysroot failed: No such file or
directory
root:/> ls
bin        init       proc       scriptfns  tmp
dev        lib        root       sys        usr
etc        modules    sbin       sysroot    var 
root:/> ls /dev/
/dev/cciss/    /dev/ram3      /dev/sda2      /dev/sdb2      /dev/tty3 
/dev/console   /dev/ram4      /dev/sda3      /dev/sdb3      /dev/tty4 
/dev/ida/      /dev/ram5      /dev/sda4      /dev/sdb4      /dev/tty5 
/dev/mapper/   /dev/ram6      /dev/sda5      /dev/sdb5      /dev/tty6 
/dev/mem       /dev/ram7      /dev/sda6      /dev/sdb6      /dev/tty7 
/dev/null      /dev/ram8      /dev/sda7      /dev/sdb7      /dev/tty8 
/dev/ptmx      /dev/ram9      /dev/sda8      /dev/sdb8      /dev/tty9 
/dev/pts/      /dev/rtc       /dev/sda9      /dev/sdb9      /dev/ttyS0 
/dev/ram0      /dev/sda       /dev/sdb       /dev/shm/      /dev/ttyS1 
/dev/ram1      /dev/sda1      /dev/sdb1      /dev/systty    /dev/ttyS2 
/dev/ram10     /dev/sda10     /dev/sdb10     /dev/tty       /dev/ttyS3 
/dev/ram11     /dev/sda11     /dev/sdb11     /dev/tty0      /dev/urandom 
/dev/ram12     /dev/sda12     /dev/sdb12     /dev/tty1      /dev/zero 
/dev/ram13     /dev/sda13     /dev/sdb13     /dev/tty10 
/dev/ram14     /dev/sda14     /dev/sdb14     /dev/tty11 
/dev/ram15     /dev/sda15     /dev/sdb15     /dev/tty12 
/dev/ram2      /dev/sda16     /dev/sdb16     /dev/tty2 

I have the machine reserved. Feel free to do whatever you want.

Comment 6 Neil Horman 2008-04-18 14:13:30 UTC
Well, I see the problem. For some reason we aren't creating the volume group
symlinks during kdump boot. Not sure why; I'll see if I can figure that out.


Comment 7 Neil Horman 2008-04-18 15:43:23 UTC
Ok, so the issue seems fairly straightforward: lvm isn't bothering to scan the
sd* drives when looking for logical volume labels. I have no idea why this
would be happening, as it seems to work just fine in the normal kernel. I just
tested the -90 kernel with the -21 kexec-tools on my system, which has
lvm2-2.02-12 on it, and it captures fine, so I'm guessing that something in one
of the omnibus updates to lvm2 between release -12 and -32 (the latter is
what's on all your test systems) is causing this failure. Milan, can you shed
any light on why lvm2 might be behaving this way in kdump? Thanks!

Comment 8 Milan Broz 2008-04-21 09:00:39 UTC
So the problem is:
- after a crash the new kdump kernel is loaded and lvm vgscan doesn't find the volume group
- with lvm2-2.02.12 it works, but not with lvm2-2.02.32 (the same kernel?)
Correct?

To analyse this we need complete logs - please run the lvm scanning command
with the "-vvvv" switch, and please also attach an lvmdump from the working
system.

From the RHTS logs I can see that the crash is in ATA code, so I expect that
the controller is left in some strange state and the attached device is not
responding to the lvm scan later.
(Just guessing - we need logs to prove that the device was really scanned for
lvm metadata.)


Comment 9 Qian Cai 2008-04-21 10:04:12 UTC
Created attachment 303128 [details]
lvmdump from working system

Comment 10 Qian Cai 2008-04-21 10:04:42 UTC
Created attachment 303129 [details]
lvm vgscan -vvvv in kdump kernel

Comment 11 Qian Cai 2008-04-21 10:08:00 UTC
I have got the machine reserved, so feel free to have a look. It is currently
in the kdump kernel shell, and you can use the following command to connect:

conmux bludger.lab.boston.redhat.com/nec-em3.rhts.boston.redhat.com


Comment 12 Milan Broz 2008-04-21 11:26:16 UTC
root:/> cat /proc/partitions
major minor  #blocks  name

   8     0  117187560 sda
   8    16  117187560 sdb


For some reason, the kernel sees no partitions (and the LVM PVs are on
/dev/sda2 and /dev/sdb1).

Kernel log says the same:

sda: Write Protect is off
SCSI device sda: drive cache: write back
 sda: unknown partition table
...

The lvm tools scan only valid devices; even if I force the sysfs validation
option off, it prints:

#device/dev-io.c:401         /dev/sda1: open failed: No such device or address
#filters/filter.c:106         /dev/sda1: Skipping: open failed

So this looks to me like some hw problem...
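Milan's /proc/partitions check can be scripted. A small sketch (the `find_unpartitioned` helper name and the read-from-stdin interface are mine, not from the bug):

```shell
# Read a /proc/partitions-style table on stdin and print any whole
# disk that exposes no partitions at all -- the symptom in this bug
# (sda and sdb are listed, but no sda1, sdb1, ...).
find_unpartitioned() {
  awk 'NF == 4 && $1 ~ /^[0-9]+$/ {
         name = $4
         if (name ~ /[0-9]$/) {          # partition: strip the number
           sub(/[0-9]+$/, "", name)
           seen[name] = 1
         } else {                        # whole-disk entry
           disks[name] = 1
         }
       }
       END { for (d in disks) if (!(d in seen)) print d }' | sort
}
```

Usage: `find_unpartitioned < /proc/partitions`. On the listing above it would print both sda and sdb, confirming that the kernel registered the disks but never populated their partitions.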


Comment 13 Neil Horman 2008-04-21 12:35:25 UTC
Thank you, Milan. That certainly seems to be the case: we have some hardware
inconsistency here. However, Cai, I could have sworn that we've at least used
nec-em3 to do some kdump testing in the past, and given that you're crashing it
with sysrq-c, the disk controllers should be in as reliable a state as we can
ever really expect. That said, do you know if these systems worked previously
with kdump (again, with nec-em3, I'm almost certain I've done kdumps there
before)? If so, do you know the last good kernel that worked with these? If
possible, can you bisect from there to find the last good kernel we had? If it
helps you any, the changelog indicates that several sata, scsi, and libata
fixes went in between -85 and -89. My guess is that -84 works just fine. If
you could test it, I'd appreciate it (I'm trying to get on the machine to test
myself right now, but it seems DOA). If -84 works, we'll flag this as a
regression and see about backing out whatever change triggered it, or working
around it.

Comment 14 Qian Cai 2008-04-23 09:52:20 UTC
It has the same problem on both -53.el5 and -84.el5. I have the machine
reserved; feel free to have a look.

Comment 15 Neil Horman 2008-04-23 12:33:40 UTC
Hmm, we must have been using a network dump target then. Milan is right: I've
tried several dumps now, and while the results are inconsistent (sometimes we
see some partitions on /dev/sdb), /dev/sda always reads as having an incorrect
partition table. That certainly says hardware error to me. Let me ask garzik
if he's seen this behavior before.


Comment 16 Neil Horman 2008-04-23 13:36:17 UTC
Ok, interesting data point here. If I re-read the partition table manually
with hdparm -z immediately after I drop to a shell on nec-em3, then I find all
my partitions and can rebuild my volume. It's like the sata controller is
presenting drives to the system before they are ready to accept read/write requests.

Comment 17 Neil Horman 2008-04-23 15:07:06 UTC
Created attachment 303503 [details]
patch to reread partition table on drives as they are detected

Ok, so I'm able to make this work if I add this patch in. It basically just
forces a re-read of the partition table for every block device as it's created.
It seems, from my poking about, that for some reason the initial read of the
partition table happens before a drive is fully initialized (or otherwise ready
to accept read requests), but after its presence is registered with the kernel.
I'd be willing to fix it with this patch, as it seems a 'safe' thing to do, but
I don't think I should have to do it. I rather feel like this patch is just a
band-aid on the underlying problem, which is that userspace is provided with
access to block devices prior to them truly being accessible (at least with
this controller). Jeff, do you have any insight on why the controller might be
doing this, and whether there is anything we can do about it? Or do we just
need to force a re-read of the partition table sometime after the drives are
visible in sysfs? If the answer is the latter, what's the minimum 'safe' time
we need to wait before we can be guaranteed drives are actually accessible?

Comment 18 Qian Cai 2008-08-07 15:31:31 UTC
I have still seen this on RHEL-5.2 GA on several machines:

http://rhts.redhat.com/cgi-bin/rhts/test_log.cgi?id=3853016

It failed to find rootfs in LVM.

Scanning logical volumes
  Reading all physical volumes.  This may take a while...
  No volume groups found
  No volume groups found
Activating logical volumes
  No volume groups found
  No volume groups found

Comment 19 Neil Horman 2008-08-07 17:23:14 UTC
I expect you would, as I didn't check the patch in. I'm still waiting on jgarzik to comment on why we would need to re-read the partition table. Jeff?

Comment 28 errata-xmlrpc 2009-01-20 20:59:21 UTC
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2009-0105.html

