Bug 1340976
| Summary: | Sometimes guest OS paused after managedsave & start. | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Fangge Jin <fjin> |
| Component: | libvirt | Assignee: | Jiri Denemark <jdenemar> |
| Status: | CLOSED ERRATA | QA Contact: | Virtualization Bugs <virt-bugs> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 7.2 | CC: | dyuan, mzhan, rbalakri, yanqzhan, yanyang, zpeng |
| Target Milestone: | rc | Keywords: | Reopened |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | libvirt-2.0.0-1.el7 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2016-11-03 18:46:02 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | | | |
Created attachment 1163016 [details]
libvirtd log when guest resumes successfully
Can you please re-test this with a current libvirt build? It should already be fixed. Thanks.

*** This bug has been marked as a duplicate of bug 1265902 ***

Heh, I didn't really want to close this as a duplicate; bug 1265902 is related since the root cause is the same, but the code path is a bit different and was fixed later... This bug is already fixed upstream by v1.2.21-90-g2c4ba8b:

commit 2c4ba8b4f3d20c0cb14e8ee91f087d98a0406802
Author:     Jiri Denemark <jdenemar>
AuthorDate: Wed Nov 11 18:02:23 2015 +0100
Commit:     Jiri Denemark <jdenemar>
CommitDate: Thu Nov 19 09:41:23 2015 +0100

    qemu: Use -incoming defer for migrations

    Traditionally, we pass the incoming migration URI on the QEMU command
    line, which has some drawbacks. Depending on the URI, QEMU may
    initialize its migration state immediately without giving us a chance
    to set any additional migration parameters (this applies mainly to
    fd: URIs). For some URIs the monitor may be completely blocked from
    the beginning until migration is finished, which means we may be
    stuck in the qmp_capabilities command without being able to send any
    QMP commands.

    QEMU solved this by introducing a "defer" parameter for the -incoming
    command line option. It tells QEMU to prepare for an incoming
    migration while the actual incoming URI is sent using the
    migrate-incoming QMP command. Before calling this command we can talk
    to the monitor normally and even set any migration parameters, which
    will be honored by the incoming migration.

    Signed-off-by: Jiri Denemark <jdenemar>

Reproduced this bug with libvirt-1.2.17-13.el7_2.5.

Steps to reproduce:
1. Create an empty file to collect ping output:
# echo > unreach.txt
2. Use a loop to repeatedly do "managedsave -> start -> ping guest":
# for ((i=0; i<50; i++)); do (echo $i >> unreach.txt; virsh managedsave testvm; virsh start testvm; (ping 192.168.122.46 >> unreach.txt &); sleep 4; kill `pgrep ping`); done
3. Check unreach.txt for unreachable replies:
# cat unreach.txt | grep Unreachable
From 192.168.122.1 icmp_seq=1 Destination Host Unreachable
From 192.168.122.1 icmp_seq=2 Destination Host Unreachable
From 192.168.122.1 icmp_seq=3 Destination Host Unreachable
From 192.168.122.1 icmp_seq=4 Destination Host Unreachable
From 192.168.122.1 icmp_seq=1 Destination Host Unreachable
From 192.168.122.1 icmp_seq=2 Destination Host Unreachable
From 192.168.122.1 icmp_seq=3 Destination Host Unreachable
From 192.168.122.1 icmp_seq=4 Destination Host Unreachable
From 192.168.122.1 icmp_seq=1 Destination Host Unreachable
......

-------------------

Verified this bug with libvirt-2.0.0-1.el7.x86_64.

Steps to verify:
1. # echo > unreach.txt
2. # for ((i=0; i<50; i++)); do (echo $i >> unreach.txt; virsh managedsave testvm; virsh start testvm; (ping 192.168.122.207 >> unreach.txt &); sleep 4; kill `pgrep ping`); done
3. Check unreach.txt; there are no unreachable replies:
# cat unreach.txt | grep Unreachable
(no output)

Since the result is as expected, marking this bug verified.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2016-2577.html
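For reference, the mechanism introduced by the fixing commit can be sketched as below. This is an illustrative sketch, not libvirt's exact command lines: the fd number, the elided arguments, and the optional capability-setup step are placeholders.

```
# Old flow: the migration URI goes directly on the command line. For fd:
# URIs QEMU initializes migration state immediately, and the monitor may
# stay blocked until the migration finishes.
qemu-kvm ... -incoming fd:26

# New flow: start QEMU with "-incoming defer"; the monitor stays usable,
# so migration parameters can be set first, and the URI is supplied later
# via the migrate-incoming QMP command.
qemu-kvm ... -incoming defer
{"execute": "migrate-set-capabilities", "arguments": {"capabilities": [...]}}
{"execute": "migrate-incoming", "arguments": {"uri": "fd:26"}}
```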
Created attachment 1163015 [details]
libvirtd log

Description of problem:
After managedsave & start of a guest, sometimes the guest OS does not actually resume, although virsh reports the guest as running.

Version-Release number of selected component (if applicable):
libvirt-1.2.17-13.el7_2.5.x86_64
qemu-kvm-rhev-2.3.0-31.el7_2.14.x86_64

How reproducible:
Seldom

Steps to Reproduce:
1. Do managedsave and start:
# virsh managedsave avocado-vt-vm1
Domain avocado-vt-vm1 state saved by libvirt
# virsh start avocado-vt-vm1
Domain avocado-vt-vm1 started

2. # virsh list
 Id    Name                           State
----------------------------------------------------
 24    avocado-vt-vm1                 running

3. Ping the guest:
# ping 192.168.122.7
PING 192.168.122.7 (192.168.122.7) 56(84) bytes of data.
From 192.168.122.1 icmp_seq=10 Destination Host Unreachable
From 192.168.122.1 icmp_seq=11 Destination Host Unreachable
From 192.168.122.1 icmp_seq=12 Destination Host Unreachable
From 192.168.122.1 icmp_seq=13 Destination Host Unreachable
^C
--- 192.168.122.7 ping statistics ---
14 packets transmitted, 0 received, +4 errors, 100% packet loss, time 13001ms

4. Suspend and resume the guest:
# virsh suspend avocado-vt-vm1; virsh resume avocado-vt-vm1
Domain avocado-vt-vm1 suspended
Domain avocado-vt-vm1 resumed

5. Ping the guest again:
# ping 192.168.122.7
PING 192.168.122.7 (192.168.122.7) 56(84) bytes of data.
64 bytes from 192.168.122.7: icmp_seq=1 ttl=64 time=0.114 ms
64 bytes from 192.168.122.7: icmp_seq=2 ttl=64 time=0.117 ms
^C
--- 192.168.122.7 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.114/0.115/0.117/0.010 ms

Actual results:
The guest does not actually resume after managedsave and start.

Expected results:
The guest resumes successfully after managedsave and start.

Additional info:
I suspect this issue also exists in RHEL 7.3, but it's hard to reproduce.
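The pass/fail criterion used in the reproduce and verify steps is simply whether any "Destination Host Unreachable" lines appear in the collected ping output. A minimal sketch of that check in Python (the helper names are mine, not part of the bug's scripts):

```python
import re

def unreachable_count(ping_output: str) -> int:
    """Count 'Destination Host Unreachable' lines in captured ping output."""
    return len(re.findall(r"Destination Host Unreachable", ping_output))

def guest_resumed(ping_output: str) -> bool:
    """The guest is considered resumed only if no unreachable replies appear."""
    return unreachable_count(ping_output) == 0
```

Running this over the contents of unreach.txt reproduces the `grep Unreachable` check from the steps above.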