Bug 677884 - During reverse migration from target to source host, breaking the migration causes subsequent operations to fail
Summary: During reverse migration from target to source host, breaking the migration causes subsequent operations to fail
Keywords:
Status: CLOSED DUPLICATE of bug 682953
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libvirt
Version: 6.1
Hardware: x86_64
OS: Linux
Priority: medium
Severity: low
Target Milestone: rc
Target Release: ---
Assignee: Daniel Veillard
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-02-16 08:07 UTC by weizhang
Modified: 2011-06-10 02:19 UTC
CC List: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-06-10 02:19:52 UTC
Target Upstream Version:
Embargoed:



Description weizhang 2011-02-16 08:07:12 UTC
Description of problem:
When doing a reverse migration from the target host back to the source host, pressing Ctrl+C to break the migration and then shutting down or destroying the domain on the target host produces an error:
error: Failed to shutdown domain graph
error: Timed out during operation: cannot acquire state change lock


Version-Release number of selected component (if applicable):
libvirt-0.8.7-6.el6.x86_64
qemu-kvm-0.12.1.2-2.144.el6.x86_64
kernel-2.6.32-113.el6.x86_64

How reproducible:
always

Steps to Reproduce:
1. Mount nfs on both side and install a guest on source host
2. Start the guest and do migration
# virsh migrate --live guest qemu+ssh://{target ip}/system
3. Migrate back from the target host to the source host, and press Ctrl+C during this process (a command sketch follows these steps)
4. Shutdown/Destroy this guest on target host
# virsh shutdown guest
error: Failed to shutdown domain guest
error: Timed out during operation: cannot acquire state change lock
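
Step 3 has no explicit command above; presumably it is the mirror of step 2, run on the target host against the original source host. A hedged sketch, assuming the same guest name and qemu+ssh transport, with {source ip} as a placeholder:
# virsh migrate --live guest qemu+ssh://{source ip}/system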
  
Actual results:
The errors shown above are reported.

Expected results:
Shutdown/destroy succeeds with no error after the migration is broken.

Additional info:
I took a backtrace of all threads after the break and got:
Thread 2 (Thread 0x7f1783f90700 (LWP 15184)):
#0  0x0000003690cdde87 in ioctl () from /lib64/libc.so.6
#1  0x000000000042ce8f in kvm_run (env=0x1079250) at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:928
#2  0x000000000042d319 in kvm_cpu_exec (env=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:1664
#3  0x000000000042e05f in kvm_main_loop_cpu (_env=0x1079250) at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:1932
#4  ap_main_loop (_env=0x1079250) at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:1982
#5  0x00000036918077e1 in start_thread () from /lib64/libpthread.so.0
#6  0x0000003690ce5dcd in clone () from /lib64/libc.so.6

Thread 1 (Thread 0x7f17841c0940 (LWP 15161)):
#0  0x0000003690cde923 in select () from /lib64/libc.so.6
#1  0x000000000040b8d0 in main_loop_wait (timeout=1000) at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:4417
#2  0x000000000042b29a in kvm_main_loop () at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:2165
#3  0x000000000040ef0f in main_loop (argc=<value optimized out>, argv=<value optimized out>, envp=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:4634
#4  main (argc=<value optimized out>, argv=<value optimized out>, envp=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:6848

I don't know whether it is useful, though.

Comment 2 Eric Blake 2011-02-16 14:59:56 UTC
That backtrace appears to be from qemu.  Can you instead provide the backtrace from libvirtd?
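
For reference, a hedged sketch of one common way to capture such a backtrace, assuming gdb and the matching debuginfo packages are installed (the pidof lookup is only illustrative):
# gdb --batch -p $(pidof libvirtd) -ex "thread apply all backtrace"
This attaches to the running libvirtd, dumps every thread's stack, and then detaches.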

Also, just to make sure I'm clear on the reproduction case:

machine A is both the control host (where you are running virsh and typing Ctrl-C) and the destination of the reverse migration, and machine B is the source of the reverse migration (I call it the reverse migration because you started out by migrating from A to B). Correct?

Comment 3 weizhang 2011-02-17 05:46:42 UTC
Hi Eric,

I think it will be clearer if I describe it as follows (a command-level sketch is given after the steps):
1. install guest on host A
2. do live migration from host A to B
3. do live migration from host B to A with the same guest, and press Ctrl-C on host B before the migration finishes
4. do virsh shutdown/destroy guest on host B
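
Roughly, as a command-level sketch (the IP addresses are placeholders; the guest is assumed to already be installed on shared NFS storage mounted on both hosts, as in the original steps):

On host A:
# virsh start guest
# virsh migrate --live guest qemu+ssh://{host B ip}/system

On host B (press Ctrl-C before this migration finishes):
# virsh migrate --live guest qemu+ssh://{host A ip}/system

On host B, after breaking the migration:
# virsh shutdown guest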


The libvirtd backtrace is as follows; I hope it can help you:
(gdb) thread apply all backtrace
Thread 7 (Thread 0x7f89bbe4e700 (LWP 9061)):
#0  0x0000003690cdc6c3 in poll () from /lib64/libc.so.6
#1  0x000000000041895d in virEventRunOnce () at event.c:584
#2  0x000000000041b2d9 in qemudOneLoop () at libvirtd.c:2238
#3  0x000000000041b797 in qemudRunLoop (opaque=0xdc7640) at libvirtd.c:2348
#4  0x00000036918077e1 in start_thread () from /lib64/libpthread.so.0
#5  0x0000003690ce5dcd in clone () from /lib64/libc.so.6

Thread 6 (Thread 0x7f89bb44d700 (LWP 9062)):
#0  0x000000369180b44c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x0000003e22a40796 in virCondWait (c=<value optimized out>, m=<value optimized out>) at util/threads-pthread.c:108
#2  0x000000000041c445 in qemudWorker (data=0x7f89b40008c0) at libvirtd.c:1561
#3  0x00000036918077e1 in start_thread () from /lib64/libpthread.so.0
#4  0x0000003690ce5dcd in clone () from /lib64/libc.so.6

Thread 5 (Thread 0x7f89baa4c700 (LWP 9063)):
#0  0x000000369180b44c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x0000003e22a40796 in virCondWait (c=<value optimized out>, m=<value optimized out>) at util/threads-pthread.c:108
#2  0x000000000041c445 in qemudWorker (data=0x7f89b40008d8) at libvirtd.c:1561
#3  0x00000036918077e1 in start_thread () from /lib64/libpthread.so.0
#4  0x0000003690ce5dcd in clone () from /lib64/libc.so.6

Thread 4 (Thread 0x7f89ba04b700 (LWP 9064)):
#0  0x000000369180b44c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x0000003e22a40796 in virCondWait (c=<value optimized out>, m=<value optimized out>) at util/threads-pthread.c:108
#2  0x000000000041c445 in qemudWorker (data=0x7f89b40008f0) at libvirtd.c:1561
#3  0x00000036918077e1 in start_thread () from /lib64/libpthread.so.0
#4  0x0000003690ce5dcd in clone () from /lib64/libc.so.6

Thread 3 (Thread 0x7f89b964a700 (LWP 9065)):
#0  0x000000369180b44c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x0000003e22a40796 in virCondWait (c=<value optimized out>, m=<value optimized out>) at util/threads-pthread.c:108
#2  0x000000000041c445 in qemudWorker (data=0x7f89b4000908) at libvirtd.c:1561
#3  0x00000036918077e1 in start_thread () from /lib64/libpthread.so.0
#4  0x0000003690ce5dcd in clone () from /lib64/libc.so.6

Thread 2 (Thread 0x7f89b8c49700 (LWP 9066)):
#0  0x000000369180b44c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x0000003e22a40796 in virCondWait (c=<value optimized out>, m=<value optimized out>) at util/threads-pthread.c:108
#2  0x000000000041c445 in qemudWorker (data=0x7f89b4000920) at libvirtd.c:1561
#3  0x00000036918077e1 in start_thread () from /lib64/libpthread.so.0
#4  0x0000003690ce5dcd in clone () from /lib64/libc.so.6

Thread 1 (Thread 0x7f89c2b09800 (LWP 9059)):
#0  0x000000369180803d in pthread_join () from /lib64/libpthread.so.0
#1  0x000000000041f968 in main (argc=<value optimized out>, argv=<value optimized out>) at libvirtd.c:3333

Comment 4 RHEL Program Management 2011-04-04 02:07:19 UTC
Since the RHEL 6.1 External Beta has begun and this bug remains
unresolved, it has been rejected, as it is not proposed as an
exception or blocker.

Red Hat invites you to ask your support representative to
propose this request, if appropriate and relevant, in the
next release of Red Hat Enterprise Linux.

Comment 5 Dave Allan 2011-06-10 02:19:52 UTC

*** This bug has been marked as a duplicate of bug 682953 ***

