Bug 608964 - vbd device of HVM guest lost after local migration
Status: CLOSED DUPLICATE of bug 622501
Alias: None
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: xen
Version: 5.6
Hardware: All
OS: Linux
Priority: low
Severity: high
Target Milestone: rc
Assignee: Miroslav Rezanina
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Keywords:
Depends On:
Blocks:
 
Reported: 2010-06-29 03:38 UTC by Lei Wang
Modified: 2010-11-09 13:24 UTC (History)
4 users

Clone Of:
Last Closed: 2010-06-30 11:37:01 UTC


Attachments (Terms of Use)
rhel-5.5-64-hvm.conf (599 bytes, application/octet-stream)
2010-06-29 03:38 UTC, Lei Wang
xend.log (787.57 KB, text/plain)
2010-06-29 03:39 UTC, Lei Wang
xm_dmesg.log (16.00 KB, text/plain)
2010-06-29 03:40 UTC, Lei Wang
hvm-reboot-error.png (115.77 KB, image/png)
2010-06-29 03:41 UTC, Lei Wang

Description Lei Wang 2010-06-29 03:38:30 UTC
Created attachment 427556 [details]
rhel-5.5-64-hvm.conf

Description of problem:
The vbd device of an HVM guest is lost after local migration, and the subsequent reboot
fails because no bootable device is found. Remote migration does not have this issue.

Version-Release number of selected component (if applicable):
xen-3.0.3-113.el5
kernel-xen-2.6.18-203.el5
kernel-xen-devel-2.6.18-203.el5

How reproducible:
Always

Steps to Reproduce:
1. xm create rhel-5.5-64-hvm.conf (config file attached)
2. Check the vbd device:
  xm list -l rhel-5.5-64-hvm
  Note the device/vbd segment, e.g.:
    (device
        (vbd
            (backend 0)
            (dev hda:disk)
            (uname
                file:/data/xen-autotest/client/tests/xen/images/RHEL-Server-5.5-64-hvm.raw
            )
            (mode w)
        )
    )
3. Local migration:
  xm migrate -l rhel-5.5-64-hvm localhost
4. Check the vbd device again:
  xm list -l rhel-5.5-64-hvm
  This time, note that the device/vbd segment shown in step 2 is missing; it was lost.
5. Reboot the HVM guest.
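The vbd check in steps 2 and 4 can be scripted. A minimal sketch (the helper name and the sample strings are hypothetical, not part of xm) that scans the s-expression output of `xm list -l` for a vbd segment:

```python
def has_vbd_segment(sxp_text):
    """Return True if the xm s-expression output contains a (vbd ...) device.

    Crude tokenizer: treat parentheses as whitespace and look for the
    "vbd" atom; enough for a quick pass/fail check in a test script.
    """
    for sep in "()":
        sxp_text = sxp_text.replace(sep, " ")
    return "vbd" in sxp_text.split()

# Illustrative inputs (abridged versions of the step 2 / step 4 output):
before = "(device (vbd (backend 0) (dev hda:disk) (mode w)))"
after_migration = "(device (vif (backend 0)))"

print(has_vbd_segment(before))           # vbd present before migration
print(has_vbd_segment(after_migration))  # vbd missing afterwards
```

In a reproducer script, `before` and `after_migration` would come from capturing `xm list -l rhel-5.5-64-hvm` around the migration step.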

ERROR messages from xend.log:
[...]
[2010-06-29 11:21:53 xend.XendDomainInfo 3435] DEBUG (XendDomainInfo:947) XendDomainInfo.completeRestore done
[2010-06-29 11:21:53 xend 3435] DEBUG (DevController:160) Waiting for devices vif.
[2010-06-29 11:21:53 xend 3435] DEBUG (DevController:166) Waiting for 0.
[2010-06-29 11:21:53 xend.XendDomainInfo 3435] DEBUG (XendDomainInfo:1257) XendDomainInfo.handleShutdownWatch
[2010-06-29 11:21:53 xend 3435] DEBUG (DevController:538) hotplugStatusCallback /local/domain/0/backend/vif/2/0/hotplug-status.
[2010-06-29 11:21:53 xend 3435] DEBUG (DevController:552) hotplugStatusCallback 1.
[2010-06-29 11:21:53 xend 3435] DEBUG (DevController:160) Waiting for devices usb.
[2010-06-29 11:21:53 xend 3435] DEBUG (DevController:160) Waiting for devices vbd.
[2010-06-29 11:21:53 xend 3435] DEBUG (DevController:166) Waiting for 768.
[2010-06-29 11:21:53 xend 3435] DEBUG (DevController:538) hotplugStatusCallback /local/domain/0/backend/vbd/2/768/hotplug-status.
[2010-06-29 11:21:53 xend 3435] DEBUG (DevController:552) hotplugStatusCallback 5.
[2010-06-29 11:21:53 xend 3435] ERROR (XendCheckpoint:295) Device 768 (vbd) could not be connected.
File /var/run/xen-autotest/images/RHEL-Server-5.5-64-hvm.raw is loopback-mounted through /dev/loop0,
which is mounted in a guest domain,
and so cannot be mounted now.
Traceback (most recent call last):
  File "/usr/lib64/python2.4/site-packages/xen/xend/XendCheckpoint.py", line 293, in restore
    dominfo.waitForDevices() # Wait for backends to set up
  File "/usr/lib64/python2.4/site-packages/xen/xend/XendDomainInfo.py", line 2421, in waitForDevices
    self.waitForDevices_(c)
  File "/usr/lib64/python2.4/site-packages/xen/xend/XendDomainInfo.py", line 1440, in waitForDevices_
    return self.getDeviceController(deviceClass).waitForDevices()
  File "/usr/lib64/python2.4/site-packages/xen/xend/server/DevController.py", line 162, in waitForDevices
    return map(self.waitForDevice, self.deviceIDs())
  File "/usr/lib64/python2.4/site-packages/xen/xend/server/DevController.py", line 196, in waitForDevice
    raise VmError("Device %s (%s) could not be connected.\n%s" %
VmError: Device 768 (vbd) could not be connected.
File /var/run/xen-autotest/images/RHEL-Server-5.5-64-hvm.raw is loopback-mounted through /dev/loop0,
which is mounted in a guest domain,
and so cannot be mounted now.
[...]
 
Actual results:
Reboot fails with the following error (see also the attached screenshot):
Booting from Hard Disk...
Boot from Hard Disk failed: could not read the boot disk
FATAL: No bootable device.

Expected results:
The vbd device should not be lost, and the HVM guest should reboot successfully.

Additional info:
Remote migration was tried and does not exhibit this issue.

Comment 1 Lei Wang 2010-06-29 03:39:33 UTC
Created attachment 427557 [details]
xend.log

Comment 2 Lei Wang 2010-06-29 03:40:15 UTC
Created attachment 427558 [details]
xm_dmesg.log

Comment 3 Lei Wang 2010-06-29 03:41:02 UTC
Created attachment 427559 [details]
hvm-reboot-error.png

Comment 4 Miroslav Rezanina 2010-06-30 07:12:05 UTC
This behavior is logical. Xen does not allow starting a guest with a file-backed disk that is already in use by another guest. During migration, the original copy of the guest is destroyed only after the new copy is created, and that is the problem: the disk is still assigned to the source copy, so it cannot be assigned to the destination, and no disk description is stored for the new guest. On reboot, no disk is attached, so the guest cannot boot.
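The conflict described above can be sketched with an advisory file lock (an analogy only, not xend's actual mechanism; the variable names are made up): the source domain still holds the disk image when the local destination tries to claim it, so the destination's attach fails, and the source releases the image only afterwards.

```python
import fcntl
import tempfile

# Stand-in for the guest's .raw disk image.
image = tempfile.NamedTemporaryFile()

# The source copy of the domain holds an exclusive claim on the disk.
source = open(image.name)
fcntl.flock(source, fcntl.LOCK_EX | fcntl.LOCK_NB)

# The destination copy (same host, so same file) tries to claim it too.
dest = open(image.name)
try:
    fcntl.flock(dest, fcntl.LOCK_EX | fcntl.LOCK_NB)
    attached = True
except OSError:
    # Mirrors "Device 768 (vbd) could not be connected": the file is
    # still in use, so no vbd is recorded for the new domain.
    attached = False

print(attached)  # → False: claim fails while the source holds the image

# The source is torn down only after the new copy exists -- too late,
# the vbd has already been dropped from the destination's config.
source.close()
dest.close()
```

Remote migration avoids this because the source and destination hosts each access their own copy of the image, so the claims never collide on one file.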

Comment 7 Miroslav Rezanina 2010-06-30 11:37:01 UTC
As the changes needed to support local migration of file-based vbds would be too invasive and risky, we do not handle this situation.

Comment 8 Lei Wang 2010-08-13 05:39:29 UTC
This bug was fixed by the patch from bug 622501.

I tried with a RHEL5.5-32 guest on an AMD platform, and the guest booted without the error (FATAL: No bootable device.).

