Bug 1163647 - virt-viewer core dumps with the -r option via ssh when the guest is destroyed
Summary: virt-viewer core dumps with the -r option via ssh when the guest is destroyed
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: virt-viewer
Version: 7.1
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: rc
Target Release: 7.2
Assignee: Christophe Fergeau
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On: 1181288
Blocks: 1287462
 
Reported: 2014-11-13 08:36 UTC by CongDong
Modified: 2015-12-02 07:38 UTC
CC List: 7 users

Fixed In Version: virt-viewer-2.0-1.el7
Doc Type: Bug Fix
Doc Text:
Clone Of:
Cloned To: 1287462
Environment:
Last Closed: 2015-11-19 07:35:00 UTC
Target Upstream Version:
Embargoed:


Attachments
virt-viewer debug output (7.23 KB, text/plain)
2014-11-13 08:37 UTC, CongDong


Links
System: Red Hat Product Errata
ID: RHBA-2015:2211
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: virt-viewer, spice-gtk, and libgovirt bug fix and enhancement update
Last Updated: 2015-11-19 08:27:40 UTC

Description CongDong 2014-11-13 08:36:42 UTC
Description of problem:
When virt-viewer connects to a guest on a remote host via ssh with the
-r/--reconnect option and the guest is then destroyed, virt-viewer
core dumps.

Version-Release number of selected component (if applicable):
virt-viewer-0.6.0-11.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. install a spice guest on host A
# virsh dumpxml vm1
...
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
...
    <graphics type='spice' port='5900' autoport='yes' listen='127.0.0.1'>
      <listen type='address' address='127.0.0.1'/>
    </graphics>
...
    <video>
      <model type='qxl' ram='65536' vram='65536' heads='1'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
...
2. start the guest, then connect to it from host B
# virt-viewer -c qemu+ssh://$host_A_ip/system vm1 -r
3. destroy the guest on host A

Actual results:
Step 2: virt-viewer can connect to the guest.
Step 3: after the guest is destroyed, virt-viewer core dumps:
#0  virt_viewer_app_get_nth_window (self=self@entry=0x0, nth=0)
    at virt-viewer-app.c:783
#1  0x0000000000413215 in display_show_hint (display=0xf0e9f0, 
    pspec=<optimized out>, user_data=<optimized out>) at virt-viewer-app.c:913
#2  0x00007fa05698dfd8 in g_closure_invoke () from /lib64/libgobject-2.0.so.0
#3  0x00007fa0569a00ad in signal_emit_unlocked_R ()
   from /lib64/libgobject-2.0.so.0
#4  0x00007fa0569a7e32 in g_signal_emit_valist ()
   from /lib64/libgobject-2.0.so.0
#5  0x00007fa0569a80ef in g_signal_emit () from /lib64/libgobject-2.0.so.0
#6  0x00007fa056992675 in g_object_dispatch_properties_changed ()
   from /lib64/libgobject-2.0.so.0
#7  0x00007fa056994d59 in g_object_notify () from /lib64/libgobject-2.0.so.0
#8  0x00000000004232f7 in update_display_ready (self=0xf0e9f0)
    at virt-viewer-display-spice.c:158
#9  0x00007fa05698dfd8 in g_closure_invoke () from /lib64/libgobject-2.0.so.0
#10 0x00007fa0569a00ad in signal_emit_unlocked_R ()
   from /lib64/libgobject-2.0.so.0
#11 0x00007fa0569a7e32 in g_signal_emit_valist ()
   from /lib64/libgobject-2.0.so.0
#12 0x00007fa0569a80ef in g_signal_emit () from /lib64/libgobject-2.0.so.0
#13 0x00007fa056992675 in g_object_dispatch_properties_changed ()
   from /lib64/libgobject-2.0.so.0
#14 0x00007fa056994d59 in g_object_notify () from /lib64/libgobject-2.0.so.0
#15 0x00007fa058995541 in channel_destroy ()
   from /lib64/libspice-client-gtk-3.0.so.4
#16 0x00007fa05698dfd8 in g_closure_invoke () from /lib64/libgobject-2.0.so.0
#17 0x00007fa0569a00ad in signal_emit_unlocked_R ()
   from /lib64/libgobject-2.0.so.0
#18 0x00007fa0569a7e32 in g_signal_emit_valist ()
   from /lib64/libgobject-2.0.so.0
#19 0x00007fa0569a80ef in g_signal_emit () from /lib64/libgobject-2.0.so.0
#20 0x00007fa057faf617 in spice_channel_dispose ()
   from /lib64/libspice-client-glib-2.0.so.8
#21 0x00007fa056992c68 in g_object_unref () from /lib64/libgobject-2.0.so.0
#22 0x00007fa057fadd25 in spice_channel_delayed_unref ()
   from /lib64/libspice-client-glib-2.0.so.8
#23 0x00007fa0566949ba in g_main_context_dispatch ()
   from /lib64/libglib-2.0.so.0
#24 0x00007fa056694d08 in g_main_context_iterate.isra.24 ()
   from /lib64/libglib-2.0.so.0
#25 0x00007fa056694fda in g_main_loop_run () from /lib64/libglib-2.0.so.0
#26 0x00007fa05844013d in gtk_main () from /lib64/libgtk-3.so.0
#27 0x000000000040fbda in main (argc=1, argv=0x7fff6734d118)
    at virt-viewer-main.c:116
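
The top frames show a GObject signal-lifetime hazard: display_show_hint() is a property-notify handler that still fires during channel teardown, and frame #0 shows it handing a NULL VirtViewerApp pointer (self=0x0) to virt_viewer_app_get_nth_window(). Purely as an illustration of that hazard class, and not virt-viewer's actual code or its eventual fix, the following minimal, self-contained GLib program (all names in it are hypothetical) shows why a handler must be disconnected before the object behind its user_data goes away:

/* Build: gcc notify-teardown.c $(pkg-config --cflags --libs gio-2.0)
 * Hypothetical demo, not virt-viewer code. */
#include <gio/gio.h>

/* Stand-in for the application object the real handler receives. */
typedef struct {
    char name[16];
} AppState;

static void
on_enabled_notify(GObject *obj, GParamSpec *pspec, gpointer user_data)
{
    AppState *app = user_data;
    /* If "app" had been freed before this handler was disconnected,
     * this dereference would be the same class of crash as frame #0. */
    g_print("hint update for %s\n", app->name);
}

int
main(void)
{
    GSimpleAction *action = g_simple_action_new("demo", NULL);
    AppState *app = g_new0(AppState, 1);
    g_strlcpy(app->name, "viewer", sizeof app->name);

    gulong id = g_signal_connect(action, "notify::enabled",
                                 G_CALLBACK(on_enabled_notify), app);

    /* A property change emits notify while user_data is still valid. */
    g_simple_action_set_enabled(action, FALSE);

    /* Safe teardown order: disconnect first, then free the user_data. */
    g_signal_handler_disconnect(action, id);
    g_free(app);

    /* This notify finds no handler, so nothing touches freed memory. */
    g_simple_action_set_enabled(action, TRUE);

    g_object_unref(action);
    return 0;
}

The backtrace above corresponds to the unsafe ordering: the notify arrives (via channel_destroy) after the state the handler depends on is already gone.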


Expected results:
virt-viewer should wait for the guest to start again.

Additional info:
With a VNC guest the result is a little different:
when the guest is destroyed, virt-viewer won't core dump,
but it crashes when the guest is started again:

#0  0x000000000041c659 in virt_viewer_window_get_notebook (self=0x0) at virt-viewer-window.c:1157
#1  0x00000000004132e6 in display_show_hint (display=<optimized out>, pspec=<optimized out>, user_data=<optimized out>) at virt-viewer-app.c:925
#2  0x00007f76effccfd8 in g_closure_invoke () from /lib64/libgobject-2.0.so.0
#3  0x00007f76effdf0ad in signal_emit_unlocked_R () from /lib64/libgobject-2.0.so.0
#4  0x00007f76effe6e32 in g_signal_emit_valist () from /lib64/libgobject-2.0.so.0
#5  0x00007f76effe70ef in g_signal_emit () from /lib64/libgobject-2.0.so.0
#6  0x00007f76effd1675 in g_object_dispatch_properties_changed () from /lib64/libgobject-2.0.so.0
#7  0x00007f76effd3d59 in g_object_notify () from /lib64/libgobject-2.0.so.0
#8  0x00007f76effccfd8 in g_closure_invoke () from /lib64/libgobject-2.0.so.0
#9  0x00007f76effdf0ad in signal_emit_unlocked_R () from /lib64/libgobject-2.0.so.0
#10 0x00007f76effe6e32 in g_signal_emit_valist () from /lib64/libgobject-2.0.so.0
#11 0x00007f76effe70ef in g_signal_emit () from /lib64/libgobject-2.0.so.0
#12 0x00007f76f35ef380 in on_initialized () from /lib64/libgtk-vnc-2.0.so.0
#13 0x00007f76effcd207 in _g_closure_invoke_va () from /lib64/libgobject-2.0.so.0
#14 0x00007f76effe6487 in g_signal_emit_valist () from /lib64/libgobject-2.0.so.0
#15 0x00007f76effe70ef in g_signal_emit () from /lib64/libgobject-2.0.so.0
#16 0x00007f76f33d2735 in do_vnc_connection_emit_main_context () from /lib64/libgvnc-1.0.so.0
#17 0x00007f76efcd39ba in g_main_context_dispatch () from /lib64/libglib-2.0.so.0
#18 0x00007f76efcd3d08 in g_main_context_iterate.isra.24 () from /lib64/libglib-2.0.so.0
#19 0x00007f76efcd3fda in g_main_loop_run () from /lib64/libglib-2.0.so.0
#20 0x00007f76f1a7f13d in gtk_main () from /lib64/libgtk-3.so.0
#21 0x000000000040fbda in main (argc=1, argv=0x7fffa37c33d8) at virt-viewer-main.c:116
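
Here the NULL dereference moves to virt_viewer_window_get_notebook(self=0x0): the handler holds a window pointer that no longer refers to a live object. Again as a hedged sketch rather than the shipped fix, GLib's weak-pointer facility is one idiomatic guard for exactly this situation: the pointer is cleared automatically when its object is finalized, so callers can test for NULL instead of crashing (names below are illustrative):

/* Build: gcc weak-pointer.c $(pkg-config --cflags --libs gobject-2.0)
 * Hypothetical demo, not virt-viewer code. */
#include <glib-object.h>

int
main(void)
{
    /* A plain GObject stands in for the viewer window. */
    GObject *window = g_object_new(G_TYPE_OBJECT, NULL);
    GObject *maybe_window = window;

    /* GLib clears maybe_window automatically when window is finalized. */
    g_object_add_weak_pointer(window, (gpointer *) &maybe_window);

    g_object_unref(window); /* simulate the window being torn down */

    if (maybe_window == NULL)
        g_print("window is gone; skip the notebook lookup\n");
    else
        g_print("window is still alive\n");

    return 0;
}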

If this is not the same problem as in the spice case, I'll file a new bug for it.

Comment 1 CongDong 2014-11-13 08:37:15 UTC
Created attachment 957030 [details]
virt-viewer debug output

Comment 3 Fabiano Fidêncio 2014-11-14 10:59:12 UTC
CongDong,
What do you mean, exactly, by "destroy the guest on host A"? Delete the VM on host A?

Comment 4 Fabiano Fidêncio 2014-11-14 16:30:18 UTC
(In reply to Fabiano Fidêncio from comment #3)
> CongDong,
> What do you mean, exactly, by "destroy the guest on host A"? Delete the VM
> on host A?

Both Christophe and Jonathon told me that "destroy the guest on host A" means "virsh destroy".

Comment 5 Christophe Fergeau 2014-11-14 20:54:29 UTC
Seeing something somewhat similar (crashes when restarting the guest after destroying it) with virt-viewer.git, caused by https://git.fedorahosted.org/cgit/virt-viewer.git/commit/?id=f03285ba8da4a40a8058c3259788293124cc2803. Maybe this one is related, as that patch is in the rhel7 package.

Comment 8 Fabiano Fidêncio 2015-04-21 19:09:17 UTC
Will be fixed with rebase to virt-viewer 2.0

Comment 13 zhoujunqin 2015-06-16 08:39:59 UTC
I can reproduce this issue with package:
virt-viewer-0.6.0-12.el7.x86_64

Then try to verify this bug with the new build:
virt-viewer-2.0-1.el7.x86_64

Steps:
Scenario 1: Test with spice guest

1. Prepare a running spice guest on host A

# virsh dumpxml rhel6.6-snapshot0610
...
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0' state='disconnected'/>
      <alias name='channel1'/>
      <address type='virtio-serial' controller='0' bus='0' port='2'/>
    </channel>
...
    <graphics type='spice' port='5900' autoport='yes' listen='127.0.0.1'>
      <listen type='address' address='127.0.0.1'/>
      <image compression='off'/>
    </graphics>
...
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>

2. On host B, use virt-viewer to connect to the guest on host A.
# virt-viewer -c qemu+ssh://$host_A_ip/system rhel6.6-snapshot0610  -r

Result: Connect to guest on host A successfully.

3. Destroy the guest on host A

# virsh destroy rhel6.6-snapshot0610 
Domain rhel6.6-snapshot0610 destroyed

Result: The virt-viewer window stays open on host B, showing "Waiting for guest domain to re-start"; no crash.

4. Start guest on host A again:

# virsh start rhel6.6-snapshot0610 
Domain rhel6.6-snapshot0610 started

Result: The virt-viewer window stays open and reconnects to the guest automatically, showing the guest boot-up process with no crash.

Scenario 2: Test with VNC guest

1. Prepare a running vnc guest on host A

# virsh dumpxml rhel6.6-snapshot0610-clone
...
    <graphics type='vnc' port='5900' autoport='yes' listen='127.0.0.1'>
      <listen type='address' address='127.0.0.1'/>
    </graphics>
...

2. On host B, use virt-viewer to connect to the guest on host A.
# virt-viewer -c qemu+ssh://$host_A_ip/system rhel6.6-snapshot0610-clone  -r

Result: Connect to guest on host A successfully.

3. Destroy the guest on host A
# virsh destroy rhel6.6-snapshot0610-clone
Domain rhel6.6-snapshot0610-clone destroyed

Result: The virt-viewer window stays open on host B, showing "Waiting for guest domain to re-start"; no crash.

4. Start guest on host A again:

# virsh start rhel6.6-snapshot0610-clone 
Domain rhel6.6-snapshot0610-clone started

Result: The virt-viewer window stays open and reconnects to the guest automatically, showing the guest boot-up process with no crash.


Based on the above steps, moving this bug from ON_QA to VERIFIED.

Comment 15 errata-xmlrpc 2015-11-19 07:35:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-2211.html

