Bug 1130089
Summary: | Possible deadlock when the domain is destroyed on destination during migration | |
---|---|---|---
Product: | Red Hat Enterprise Linux 7 | Reporter: | Ján Tomko <jtomko>
Component: | libvirt | Assignee: | Ján Tomko <jtomko>
Status: | CLOSED ERRATA | QA Contact: | Virtualization Bugs <virt-bugs>
Severity: | medium | Docs Contact: |
Priority: | medium | |
Version: | 7.0 | CC: | dyuan, lagarcia, mzhan, rbalakri, ydu, zhwang, zpeng
Target Milestone: | rc | |
Target Release: | --- | |
Hardware: | All | |
OS: | Linux | |
Whiteboard: | | |
Fixed In Version: | libvirt-1.2.8-1.el7 | Doc Type: | Bug Fix
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2015-03-05 07:42:33 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Description
Ján Tomko
2014-08-14 10:13:26 UTC
Fixed upstream by:

commit f0f9eed843edf1339bb7b078d98e985b11f5f240
Author:     Sam Bobroff <sam.bobroff.com>
AuthorDate: 2014-08-12 12:54:42 +1000
Commit:     Ján Tomko <jtomko>
CommitDate: 2014-08-14 12:12:42 +0200

    qemu: Tidy up job handling during live migration

    During a QEMU live migration several warning messages about job
    handling could be written to syslog on the destination host:

      "entering monitor without asking for a nested job is dangerous"

    The messages are written because the job handling during migration
    uses hard coded asyncJob values in several places that are incorrect.
    This patch passes the required asyncJob value around and prevents
    the warnings as well as any issues that the warnings may be
    referring to.

    https://bugzilla.redhat.com/show_bug.cgi?id=1130089

    Signed-off-by: Sam Bobroff <sam.bobroff.com>
    Signed-off-by: Ján Tomko <jtomko>

git describe: v1.2.7-58-gf0f9eed

Verify this issue with build libvirt-1.2.8-1.el7.x86_64:

Verify steps:
1. Set up a migration environment with NFS.
2. On the source host, prepare a running guest (rhel6-guest) and run:
   [root@rhel7-a /]# virsh migrate --live rhel6-guest qemu+ssh://destEnv/system --verbose
3. Before the migration finishes, run the following command on the destination host:
   [root@rhel7-b /]# while true; do virsh destroy rhel6-guest ; done

Actual result:
1. The following error is shown on the source host:
   Migration: [ 75 %]error: operation failed: migration job: unexpectedly failed
2. Check the libvirtd status on both the source and destination hosts:
   # service libvirtd status
   Both are good; no deadlock occurs.
3. Check libvirtd.log on the destination host:
   No warning about "entering monitor without asking for a nested job is dangerous" is found.

Reproduce this issue with build libvirt-1.1.1-29.el7_0.1 plus the following patch:

---
 src/qemu/qemu_process.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 5b598be..91e9b55 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -4290,6 +4290,8 @@ int qemuProcessStart(virConnectPtr conn,
         goto cleanup;
     }
     qemuDomainObjEnterMonitor(driver, vm);
+    VIR_ERROR("Sleeping for three seconds...");
+    sleep(3);
     if (vm->def->memballoon && vm->def->memballoon->period)
         qemuMonitorSetMemoryStatsPeriod(priv->mon, vm->def->memballoon->period);
     if (qemuMonitorSetBalloon(priv->mon, cur_balloon) < 0) {
--

Reproduce steps:
1. With the above patch, build rpms for libvirt-1.1.1-29.el7_0.1 and install them on both the source and destination hosts.
2. Enable libvirtd logging in /etc/libvirt/libvirtd.conf on the destination host:
   # grep ^log /etc/libvirt/libvirtd.conf
   log_level = 1
   log_outputs="1:file:/var/log/libvirt/libvirtd.log"
3. Restart the libvirtd service on the destination host.
4. Set up a migration environment with NFS.
5. Prepare a running guest named "rhel6" on the source host.
6. On the destination host, run the following command:
   # while true; do grep "Sleeping for three seconds" /var/log/libvirt/libvirtd.log && sleep 1 && virsh destroy rhel6 ; done
7. Migrate the guest "rhel6" from source to destination:
   # virsh migrate --live rhel6 qemu+ssh://destinationHost/system --verbose

Actual result:
On the destination host:

# while true; do grep "Sleeping for three seconds" /var/log/libvirt/libvirtd.log && virsh destroy rhel6 ; done
2014-10-09 06:47:18.293+0000: 2707: error : qemuProcessStart:4219 : Sleeping for three seconds...
error: Failed to destroy domain rhel6
error: internal error: received hangup / error event on socket

Check the libvirtd service: libvirtd deadlocks, and a subsequent "virsh list" hangs.
Verify this issue with build libvirt-1.2.8-4.el7.x86_64 plus the following patch:

---
 src/qemu/qemu_process.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 9294619..a57d857 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -5200,6 +5200,8 @@ int qemuProcessAttach(virConnectPtr conn ATTRIBUTE_UNUSED,
                              VIR_DOMAIN_RUNNING_UNPAUSED);
     if (vm->def->memballoon && vm->def->memballoon->period) {
         qemuDomainObjEnterMonitor(driver, vm);
+        VIR_ERROR("Sleeping for three seconds...");
+        sleep(3);
         qemuMonitorSetMemoryStatsPeriod(priv->mon, vm->def->memballoon->period);
         qemuDomainObjExitMonitor(driver, vm);
--

Verify steps:
1. With the above patch, build rpms for libvirt-1.2.8-4.el7.x86_64 and install them on both the source and destination hosts.
2. Enable libvirtd logging in /etc/libvirt/libvirtd.conf on the destination host:
   # grep ^log /etc/libvirt/libvirtd.conf
   log_level = 1
   log_outputs="1:file:/var/log/libvirt/libvirtd.log"
3. Restart the libvirtd service on the destination host.
4. Set up a migration environment with NFS.
5. Prepare a running guest named "rhel6" on the source host.
6. On the destination host, run the following command:
   # while true; do grep "Sleeping for three seconds" /var/log/libvirt/libvirtd.log && sleep 1 && virsh destroy rhel6 ; done
7. Migrate the guest "rhel6" from source to destination:
   # virsh migrate --live rhel6 qemu+ssh://destinationHost/system --verbose

Actual result:
On the destination host:

# while true; do grep "Sleeping for three seconds" /var/log/libvirt/libvirtd.log && virsh destroy rhel6 ; done

Check the libvirtd service: the libvirtd service works well.

I can reproduce the bug following the reproducer in comment 0; the reproduce steps are as follows:

1. Recompile the libvirt package with the following patch on the target host:

---
 src/qemu/qemu_process.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 9294619..a57d857 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -5200,6 +5200,8 @@ int qemuProcessAttach(virConnectPtr conn ATTRIBUTE_UNUSED,
                              VIR_DOMAIN_RUNNING_UNPAUSED);
     if (vm->def->memballoon && vm->def->memballoon->period) {
         qemuDomainObjEnterMonitor(driver, vm);
+        VIR_ERROR("Sleeping for three seconds...");
+        sleep(3);
         qemuMonitorSetMemoryStatsPeriod(priv->mon, vm->def->memballoon->period);
         qemuDomainObjExitMonitor(driver, vm);
--

2. Set log_level and log_outputs in /etc/libvirt/libvirtd.conf on the destination host:
   # grep ^log /etc/libvirt/libvirtd.conf
   log_level = 1
   log_outputs="1:file:/var/log/libvirt/libvirtd.log"
   # service libvirtd restart
3. Prepare the NFS migration environment on both the source and target hosts.
4. Run the following command on the target host:
   # while true; do grep "Sleeping for three seconds" /var/log/libvirt/libvirtd.log && sleep 1 && virsh destroy rhel7.0 ; done
5. Migrate the guest from the source to the target and run the virsh list command on the target host; libvirtd hangs:

# while true; do grep "Sleeping for three seconds" /var/log/libvirt/libvirtd.log && sleep 1 && virsh destroy rhel7.0 ; done
2014-12-09 05:46:13.921+0000: 27190: error : qemuProcessStart:4092 : Sleeping for three seconds...

# virsh list --all
^C
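The manual checks used above and in the verification below (libvirtd still answers virsh list, and no nested-job warning appears in libvirtd.log) can be scripted in the same spirit. This is only a sketch; the 30-second timeout and the log path are assumptions, not part of the official verify steps.

#!/bin/bash
# Destination-host sanity check after a migration attempt:
#  - virsh list must return (libvirtd not deadlocked),
#  - the nested-job warning must not appear in the libvirtd log.
LOG=/var/log/libvirt/libvirtd.log

if ! timeout 30 virsh list --all > /dev/null; then
    echo "FAIL: virsh list hung or failed - libvirtd may be deadlocked"
    exit 1
fi

if grep -q "entering monitor without asking for a nested job is dangerous" "$LOG"; then
    echo "FAIL: nested-job warning found in $LOG"
    exit 1
fi

echo "PASS: libvirtd responds and no nested-job warning was logged"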
Verify this bug with libvirt-1.2.8-10.el7.x86_64:

Steps:
1. Compile the libvirt code with the following patch on the target host:

int qemuProcessStart(virConnectPtr conn,
                     virQEMUDriverPtr driver,
--
    if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0)
        goto cleanup;
    VIR_ERROR("Sleeping for three seconds...");
    sleep(3);
    if (vm->def->memballoon && vm->def->memballoon->period)
        qemuMonitorSetMemoryStatsPeriod(priv->mon, vm->def->memballoon->period);
    if (qemuMonitorSetBalloon(priv->mon, cur_balloon) < 0) {
        qemuDomainObjExitMonitor(driver, vm);
        goto cleanup;
    }

2. Set log_level and log_outputs in /etc/libvirt/libvirtd.conf on the destination host:
   <target># grep ^log /etc/libvirt/libvirtd.conf
   log_level = 1
   log_outputs="1:file:/var/log/libvirt/libvirtd.log"
   <target># service libvirtd restart
3. Prepare the NFS migration environment on both the source and target hosts.
4. Run the following command on the target host:
   <target1># while true; do grep "Sleeping for three seconds" /var/log/libvirt/libvirtd.log && sleep 1 && virsh destroy rhel7.0 ; done
5. Migrate the guest from the source to the target and run the virsh list command on the target host. The migration fails on the source host, and libvirtd on the target host does not deadlock:

<source># virsh migrate --live rhel7.0 qemu+ssh://$target_ip/system --verbose
root.4.165's password:
error: internal error: early end of file from monitor: possible problem:
qemu: terminating on signal 15 from pid 21314

<target1># while true; do grep "Sleeping for three seconds" /var/log/libvirt/libvirtd.log && sleep 1 && virsh destroy rhel7.0 ; done
2014-12-09 09:25:58.340+0000: 21357: error : qemuProcessStart:4782 : Sleeping for three seconds...
error: Failed to destroy domain rhel7.0
error: Requested operation is not valid: domain is not running
2014-12-09 09:25:58.340+0000: 21357: error : qemuProcessStart:4782 : Sleeping for three seconds...
error: failed to get domain 'rhel7.0'
error: Domain not found: no domain with matching name 'rhel7.0'

<target2># virsh list --all
 Id    Name                           State
----------------------------------------------------

6. Cancel the script in target1, then re-migrate the guest rhel7.0 from the source to the target; the guest migrates successfully.

According to the above steps, this bug is marked verified.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-0323.html