Bug 247265 - race between loading xenblk.ko and scanning for LVM partitions etc.
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: kernel-xen
Version: 5.0
Hardware: All
OS: Linux
Priority: low
Severity: low
Assigned To: Richard W.M. Jones
QA Contact: Martin Jenner
Duplicates: 230561 241793
Depends On: 241793
Blocks: 248462
Reported: 2007-07-06 10:17 EDT by Ian Campbell
Modified: 2010-12-22 04:04 EST (History)
CC List: 6 users

See Also:
Fixed In Version: RHBA-2007-0959
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2007-11-07 14:55:12 EST
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
Upstream patch against 2.6.20-2925.10 (3.22 KB, patch)
2007-07-13 05:05 EDT, Richard W.M. Jones
Patch against RHEL 5.1 2.6.18-32.el5 kernel (3.01 KB, text/x-patch)
2007-07-13 05:44 EDT, Richard W.M. Jones

Description Ian Campbell 2007-07-06 10:17:18 EDT
Description of problem:
When the Xen block frontend driver is built as a module, loading the module is
synchronous only up to the point where the frontend and the backend become
connected, not up to the point where the disk is actually added.

This means that on boot there can be a race between loading this module on the
one hand, and loading the dm-* modules and scanning for LVM physical volumes
on the other (all in the initrd). In the failure case the disk is not present
until after the scan for physical volumes has completed.
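The failure mode can be sketched as a toy shell model. The file name and the
stub functions below are hypothetical stand-ins, not the real initrd code:
modprobe returns as soon as the frontend reaches the Connected state, so a
scan that runs immediately afterwards can miss the disk.

```shell
disk=$(mktemp -u)                # stands in for /dev/xvda (hypothetical)
( sleep 0.3; : > "$disk" ) &     # the backend attaches the disk a bit later
modprobe_xenblk() { :; }         # stub: returns as soon as frontend is Connected
scan_lvm() {
    # the initrd's LVM scan, reduced to "does the disk exist yet?"
    [ -e "$disk" ] && echo "found xvda" || echo "no volume groups found"
}

modprobe_xenblk
scan_lvm                         # runs before the disk exists: the race is lost
wait
rm -f "$disk"
```

With the 0.3 s "backend delay", the scan always loses here; on a real system
the outcome depends on scheduling, which is why the bug is intermittent.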

Version-Release number of selected component (if applicable): 2.6.18-8.1.6.EL
Also applies to RHEL4u5 2.6.9-55.0.2.EL

Upstream fix is:
http://xenbits.xensource.com/linux-2.6.18-xen.hg?rev/11483a00c017
http://xenbits.xensource.com/kernels/rhel4x.hg?rev/156e3eaca552

How reproducible:
Not precisely determined. It seems to be related to high load in domain 0 or
in multiple other domains, which delays scheduling of the backend driver.

Actual results:
  Loading xenblk.ko module
  Registering block device major 202
  xvda:Loading dm-mod.ko module
  <6>device-mapper: ioctl: 4.11.0-ioctl (2006-09-14) initialised: dm-devel@redhat.com
  Loading dm-mirror.ko module
  Loading dm-zero.ko module
  Loading dm-snapshot.ko module
  Making device-mapper control node
  Scanning logical volumes
  Reading all physical volumes. This may take a while...
  xvda1 xvda2
  No volume groups found
  Activating logical volumes
  Volume group "VolGroup00" not found
  Creating root device.
  Mounting root filesystem.
  mount: could not find filesystem '/dev/root'
  Setting up other filesystems.
  Setting up new root fs
  setuproot: moving /dev failed: No such file or directory
  no fstab.sys, mounting internal defaults
  setuproot: error mounting /proc: No such file or directory
  setuproot: error mounting /sys: No such file or directory
  Switching to new root and running init.
  unmounting old /dev
  unmounting old /proc
  unmounting old /sys
  switchroot: mount failed: No such file or directory
  Kernel panic - not syncing: Attempted to kill init!

Expected results:
  Registering block device major 202
  xvda: xvda1 xvda2
  Loading dm-mod.ko module
  device-mapper: ioctl: 4.11.0-ioctl (2006-09-14) initialised: dm-devel@redhat.com
  Loading dm-mirror.ko module
  Loading dm-zero.ko module
  Loading dm-snapshot.ko module
  Making device-mapper control node
  Scanning logical volumes
  Reading all physical volumes. This may take a while...
  Found volume group "VolGroup00" using metadata type lvm2
  Activating logical volumes
  2 logical volume(s) in volume group "VolGroup00" now active

Additional info:
Comment 1 RHEL Product and Program Management 2007-07-06 12:23:37 EDT
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux maintenance release.  Product Management has requested
further review of this request by Red Hat Engineering, for potential
inclusion in a Red Hat Enterprise Linux Update release for currently deployed
products.  This request is not yet committed for inclusion in an Update
release.
Comment 2 Richard W.M. Jones 2007-07-07 03:49:58 EDT
AFAICS, this only solves half the problem.  Waiting for the disk to be
added gets you so far.  You also need to wait for the partitions on
the disk to be scanned.

See 241793 comment 15 (and onwards).
Comment 3 Richard W.M. Jones 2007-07-07 03:50:30 EDT
Bug 241793 comment 15
(Bugzilla _really_ needs a preview feature).
Comment 4 Ian Campbell 2007-07-09 04:39:12 EDT
The proposed patch causes the module load to wait until add_disk() has returned.
In 2.6.18 at least this calls down to rescan_partitions in a synchronous manner.

(add_disk->register_disk->blkdev_get->do_open->rescan_partitions).
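The effect of the patch can be sketched as a toy shell model (the file name
and stub functions are hypothetical stand-ins, not the real kernel code): the
module load now blocks until add_disk() has completed, so the disk, and with
it the rescanned partitions, exists before the LVM scan starts.

```shell
disk=$(mktemp -u)                 # stands in for /dev/xvda (hypothetical)
( sleep 0.2; : > "$disk" ) &      # the backend still attaches asynchronously
modprobe_xenblk_fixed() {
    # patched behaviour: block until add_disk() has completed, which in
    # 2.6.18 also means the partitions have been rescanned
    until [ -e "$disk" ]; do sleep 0.05; done
}
scan_lvm() { [ -e "$disk" ] && echo "found xvda" || echo "no volume groups found"; }

modprobe_xenblk_fixed             # does not return until the disk exists
scan_lvm
wait
rm -f "$disk"
```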
Comment 6 Gerd Hoffmann 2007-07-12 09:46:05 EDT
I ran into this issue too while trying to make RHEL 5 boot with PV-on-HVM
drivers. Fixed it this way:

--- /sbin/mkinitrd.kraxel       2007-07-11 13:25:06.000000000 +0200
+++ /sbin/mkinitrd      2007-07-12 12:58:16.000000000 +0200
@@ -1239,9 +1239,11 @@
 unset usb_mounted
 
 if [ -n "$scsi" ]; then
-    emit "echo Waiting for driver initialization."
+    emit "echo Waiting for driver initialization (scsi)."
     emit "stabilized --hash --interval 250 /proc/scsi/scsi"
 fi
+emit "echo Waiting for driver initialization (partitions)."
+emit "stabilized --hash --interval 250 /proc/partitions"
 
 
 if [ -n "$vg_list" ]; then
Comment 7 Richard W.M. Jones 2007-07-12 10:30:06 EDT
I have now tested the Xen upstream fix and it works.
Comment 9 Richard W.M. Jones 2007-07-13 05:05:09 EDT
Created attachment 159140 [details]
Upstream patch against 2.6.20-2925.10
Comment 10 Richard W.M. Jones 2007-07-13 05:44:30 EDT
Created attachment 159143 [details]
Patch against RHEL 5.1 2.6.18-32.el5 kernel

This patch applies cleanly against the RHEL 5.1 2.6.18-32.el5 kernel
(you will need %patch... -p2).
Comment 15 Red Hat Bugzilla 2007-07-24 20:53:50 EDT
change QA contact
Comment 17 Jarod Wilson 2007-07-27 09:49:03 EDT
I'd ping dzickus about this one, last comment from him about the patch was a request for a -p1 repost, 
which Rik did on 7/20, but it doesn't yet appear to have been pulled into the kernel build.
Comment 18 Don Zickus 2007-07-31 17:14:44 EDT
in 2.6.18-37.el5
You can download this test kernel from http://people.redhat.com/dzickus/el5
Comment 19 Chris Lalancette 2007-08-14 09:36:54 EDT
*** Bug 241793 has been marked as a duplicate of this bug. ***
Comment 20 Chris Lalancette 2007-08-14 11:33:43 EDT
*** Bug 230561 has been marked as a duplicate of this bug. ***
Comment 22 Issue Tracker 2007-08-29 07:03:07 EDT
The feedback from Fujitsu:
------------------------------------
We tested this issue on a PV domain with RHEL 5.1 snapshot 2, and the result
is fine. However, we also tested it on an HVM domain, and there the result is
not good. We understand that at present the system volume is a qemu-dm device,
so we do not hit this problem there; when we make the system volume a VBD
device instead, we do hit the same problem. Does Red Hat support making the
system volume a VBD device?

On an HVM domain whose system volume is a VBD device, it is difficult to fix
this problem on the driver side. We think some waiting logic is needed in the
initial boot sequence of the Linux OS.

Thank you.                         Nishi


This event sent from IssueTracker by mmatsuya 
 issue 125723
Comment 28 errata-xmlrpc 2007-11-07 14:55:12 EST
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2007-0959.html
