Description of problem:
On a RHEL5.5 x86_64 host, a RHEL6 i386 guest hangs after a failed migration instead of resuming.

Version-Release number of selected component (if applicable):
kernel-xen-devel-2.6.18-212.el5
kernel-xen-2.6.18-212.el5
xen-3.0.3-115.el5
xen-devel-3.0.3-115.el5
xen-libs-3.0.3-115.el5

How reproducible:
always

Steps to Reproduce:
1. Change the xend configuration to enable migration and set up NFS storage for migration.
2. Copy the HV domain images to the shared NFS storage server.
3. Mount the NFS image directory on host-A.
4. Create the VM guest on source host-A:
   [host]# xm create $vm.cfg
5. On host-A, verify that you can connect to the guest via vncviewer:
   [host]# vncviewer 127.0.0.1:$port_number
6. Do not mount the NFS image directory on host-B.
7. Migrate the guest from host-A to host-B:
   [host]# xm migrate $domid $ip_host-B
8. On host-A, connect to the guest via vncviewer again.

Actual results:
After step 8, the guest does not keep its previous state; it hangs after the failed migration.

Expected results:
After step 8, the guest should keep its previous state (i.e. resume running on host-A) after the failed migration.

Additional info:
Please see attachments: xend.log from host-A and host-B, xm dmesg from host-A, and the configure file of pv.
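For step 1, enabling migration on RHEL5 Xen is normally done through the relocation settings in /etc/xen/xend-config.sxp, followed by a restart of xend. A minimal sketch is below; the port and hosts-allow values are illustrative, not taken from the attached logs:

```
# /etc/xen/xend-config.sxp -- relocation (migration) settings, illustrative
(xend-relocation-server yes)
(xend-relocation-port 8002)
# empty string allows any host to initiate relocation; restrict this
# to the peer host's address in any real deployment
(xend-relocation-hosts-allow '')
```

Both host-A and host-B need equivalent settings for the `xm migrate` in step 7 to reach the destination's relocation port.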
Created attachment 440327 [details] xm dmesg file in host-A
Created attachment 440328 [details] xend.log in host-A
Created attachment 440329 [details] xend.log in host-B
Created attachment 440330 [details] config file of hv guest
Created attachment 440338 [details] screendump of guest
When using a RHEL6 (0901.0) x86_64 HVM guest, the guest also hangs after a failed migration.
Yang, what is the expected behavior? I suppose you would have liked the guest to resume running on host A. Also, do you know if this is a regression, and whether it affects PV guests, or an HVM guest without PV drivers? Based on the answers to these questions, I'm afraid this could be quite complex and not post-beta material.
Reporting the results of testing:
- an HVM guest without PV drivers just works
- a PV guest crashes

This happens because we do not support suspend cancellation. Marking as a duplicate, but we'll definitely have to leave it for 5.7.

*** This bug has been marked as a duplicate of bug 497080 ***