Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 882407

Summary: Reconnecting to a guest after "forceful" end of the client machine leads to crash.
Product: Red Hat Enterprise Linux 6
Reporter: Marian Krcmarik <mkrcmari>
Component: spice-server
Assignee: Uri Lublin <uril>
Status: CLOSED DUPLICATE
QA Contact: Desktop QE <desktop-qa-list>
Severity: high
Docs Contact:
Priority: unspecified
Version: 6.4
CC: acathrow, cfergeau, dblechte, dyasny, mkenneth
Target Milestone: rc
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2012-12-02 10:50:25 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Marian Krcmarik 2012-11-30 21:35:04 UTC
Description of problem:
The `monitors_config != NULL' condition is hit when a user connects to a guest machine, the client machine with the open Spice session ends "forcefully" (i.e. client machine crash, suspend, or lost connection), and the user then reconnects from a different client machine or after restoring/resuming the original client machine.

(/usr/libexec/qemu-kvm:14515): SpiceWorker-CRITICAL **: red_worker.c:10937:red_push_monitors_config: condition `monitors_config != NULL' failed
Thread 5 (Thread 0x7f4a47167700 (LWP 14518)):
#0  0x00007f4a4e46e7b7 in ioctl () from /lib64/libc.so.6
#1  0x00007f4a50a7e64a in kvm_run (env=0x7f4a51403da0) at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:1015
#2  0x00007f4a50a7eaf9 in kvm_cpu_exec (env=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:1743
#3  0x00007f4a50a7f9dd in kvm_main_loop_cpu (_env=0x7f4a51403da0) at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:2004
#4  ap_main_loop (_env=0x7f4a51403da0) at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:2060
#5  0x00007f4a503b5851 in start_thread () from /lib64/libpthread.so.0
#6  0x00007f4a4e47667d in clone () from /lib64/libc.so.6
Thread 4 (Thread 0x7f4a455aa700 (LWP 14519)):
#0  0x00007f4a503bc54d in read () from /lib64/libpthread.so.0
#1  0x00007f4a4ec12f30 in read () at /usr/include/bits/unistd.h:45
#2  spice_backtrace_gstack () at backtrace.c:100
#3  0x00007f4a4ec1b060 in spice_logv (log_domain=0x7f4a4ec972b4 "SpiceWorker", log_level=SPICE_LOG_LEVEL_CRITICAL, strloc=0x7f4a4ec975b1 "red_worker.c:10937", function=0x7f4a4ec99310 "red_push_monitors_config", format=0x7f4a4ec9759b "condition `%s' failed", args=0x7f4a455a99c0) at log.c:108
#4  0x00007f4a4ec1b19a in spice_log (log_domain=<value optimized out>, log_level=<value optimized out>, strloc=<value optimized out>, function=<value optimized out>, format=<value optimized out>) at log.c:123
#5  0x00007f4a4ebf8a57 in on_new_display_channel_client (opaque=<value optimized out>, payload=0x7f49f021d0b0) at red_worker.c:9494
#6  handle_new_display_channel (opaque=<value optimized out>, payload=0x7f49f021d0b0) at red_worker.c:10395
#7  handle_dev_display_connect (opaque=<value optimized out>, payload=0x7f49f021d0b0) at red_worker.c:11270
#8  0x00007f4a4ebd8ca7 in dispatcher_handle_single_read (dispatcher=0x7f4a51938dc8) at dispatcher.c:139
#9  dispatcher_handle_recv_read (dispatcher=0x7f4a51938dc8) at dispatcher.c:162
#10 0x00007f4a4ebf99de in red_worker_main (arg=<value optimized out>) at red_worker.c:11835
#11 0x00007f4a503b5851 in start_thread () from /lib64/libpthread.so.0
#12 0x00007f4a4e47667d in clone () from /lib64/libc.so.6
Thread 3 (Thread 0x7f4a44aea700 (LWP 14520)):
#0  0x00007f4a4e46cfc3 in poll () from /lib64/libc.so.6
#1  0x00007f4a4ebf967e in red_worker_main (arg=<value optimized out>) at red_worker.c:11805
#2  0x00007f4a503b5851 in start_thread () from /lib64/libpthread.so.0
#3  0x00007f4a4e47667d in clone () from /lib64/libc.so.6
Thread 2 (Thread 0x7f4a48d0c700 (LWP 15536)):
#0  0x00007f4a503b97bb in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f4a50a9a7a7 in cond_timedwait (unused=<value optimized out>) at posix-aio-compat.c:102
#2  aio_thread (unused=<value optimized out>) at posix-aio-compat.c:329
#3  0x00007f4a503b5851 in start_thread () from /lib64/libpthread.so.0
#4  0x00007f4a4e47667d in clone () from /lib64/libc.so.6
Thread 1 (Thread 0x7f4a509d0940 (LWP 14515)):
#0  0x00007f4a4e46f263 in select () from /lib64/libc.so.6
#1  0x00007f4a50a5a940 in main_loop_wait (timeout=1000) at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:3968
#2  0x00007f4a50a7cb9a in kvm_main_loop () at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:2244
#3  0x00007f4a50a5d738 in main_loop (argc=25, argv=<value optimized out>, envp=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:4187
#4  main (argc=25, argv=<value optimized out>, envp=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:6525

Version-Release number of selected component (if applicable):
spice-server-0.12.0-6.el6.x86_64
spice-server-debuginfo-0.12.0-6.el6.x86_64
qemu-kvm-0.12.1.2-2.337.el6.x86_64
qemu-kvm-debuginfo-0.12.1.2-2.337.el6.x86_64
virt-viewer-0.5.2-16.el6.x86_64
spice-gtk-0.14-4.el6.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Connect to the guest using remote viewer client.
2. Forcefully poweroff the client machine with spice session opened.
3. Reconnect from different client machine or from the same machine after booting it up.
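The steps above, sketched as commands (hostname and port are hypothetical placeholders; the guest from the QEMU command line below listens on Spice port 5900):

```shell
# Step 1: connect from client machine A.
remote-viewer spice://guest-host:5900 &

# Step 2: forcefully end client machine A while the session is open
# (e.g. hard power-off, suspend, or pulling the network cable).

# Step 3: reconnect from a different client machine B
# (or from A after booting it back up).
remote-viewer spice://guest-host:5900
```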
  
Actual results:
SpiceWorker-CRITICAL **: red_worker.c:10937:red_push_monitors_config: condition `monitors_config != NULL' failed

Expected results:
Successful reconnection

Additional info:
QEMU cli:
/usr/libexec/qemu-kvm -cpu SandyBridge -enable-kvm -m 1024 -smp 1,sockets=1,threads=1 -spice port=5900,disable-ticketing,seamless-migration=on -vga qxl -global qxl-vga.vram_size=67108864 -device qxl /dev/mapper/76b69f23--d04c--4065--80f0--42dc300e9c67-084eb0bb--90cf--49f4--834c--e848badbb8e1 -monitor stdio -device virtio-serial-pci,id=virtio_serial_pci0 -chardev spicevmc,id=devvdagent,name=vdagent -device virtserialport,chardev=devvdagent,name=com.redhat.spice.0,id=vdagent,bus=virtio_serial_pci0.0

Comment 2 Uri Lublin 2012-12-02 10:50:25 UTC

*** This bug has been marked as a duplicate of bug 868807 ***