Bug 1323092 - remote-viewer hangs with "Connected to graphic server"
Summary: remote-viewer hangs with "Connected to graphic server"
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: spice-gtk
Version: 7.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Default Assignee for SPICE Bugs
QA Contact: SPICE QE bug list
URL:
Whiteboard:
Duplicates: 1275673 (view as bug list)
Depends On: 1322920
Blocks:
 
Reported: 2016-04-01 09:22 UTC by Andrei Stepanov
Modified: 2016-11-04 01:15 UTC
CC: 22 users

Fixed In Version: spice-gtk-0.31-3.el7
Doc Type: Bug Fix
Doc Text:
Clone Of: 1322920
Environment:
Last Closed: 2016-11-04 01:15:55 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
remote-viewer debug (25.84 KB, text/plain), 2016-04-01 09:23 UTC, Andrei Stepanov
backtrace at "Connected to graphic server" (4.81 KB, text/plain), 2016-04-01 15:03 UTC, Andrei Stepanov
backtrace at "Connected to graphic server" #2 (5.92 KB, text/plain), 2016-04-01 15:14 UTC, Andrei Stepanov


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:2229 0 normal SHIPPED_LIVE virt-viewer, libgovirt, spice-gtk, and usbredir bug fix and enhancement update 2016-11-03 13:26:58 UTC

Description Andrei Stepanov 2016-04-01 09:22:41 UTC
The same for virt-viewer-2.0-7.el7.x86_64


+++ This bug was initially created as a clone of Bug #1322920 +++

remote-viewer hangs at "Connected to graphic server" when connecting in fullscreen mode to a VM that has one more active display than the client has monitors.

Client is Windows 7 SP1.
rpm -qf /usr/share/spice/virt-viewer-x64.msi 
rhevm-spice-client-x64-msi-3.6-6.el6.noarch
rpm -q --changelog rhevm-spice-client-x64-msi
* Mon Jan 04 2016 Uri Lublin <uril> - 3.6-6
- mingw-virt-viewer 2.0.8
- mingw-spice-gtk 0.26-10
- mingw-libusbx 1.0.20-1
- spice-usbdk-win 1.0-10
- Send requests to usbdk asynchronously (rhbz#1144043)

Host is RHEL 7.2.z
kernel-3.10.0-327.10.1.el7.x86_64
qemu-kvm-rhev-2.3.0-31.el7_2.10.x86_64

How reproducible: 50% (every second connection)

Steps to Reproduce:
1. Connect to VM with remote-viewer.
2. Go to remote-viewer->View->Displays and activate one more display than the number of monitors connected to the client.
3. Close remote-viewer.
4. Connect to the same VM in fullscreen mode.

remote-viewer hangs with "Connected to graphic server"

--- Additional comment from Eduardo Lima (Etrunko) on 2016-03-31 16:03:20 EDT ---

Can you confirm if this is related to bug 1287139?

--- Additional comment from Fabiano Fidêncio on 2016-03-31 17:38:06 EDT ---

Or, can you confirm you're setting the right number of monitors in your console options? It seems like another memory-related bug.

--- Additional comment from Andrei Stepanov on 2016-04-01 04:22:33 EDT ---

Fabiano Fidêncio, VM has 4 displays in RHEVM settings.
Eduardo Lima (Etrunko), the bug is not related to bug 1287139. (See the screenshot for what I am talking about.)

--- Additional comment from Andrei Stepanov on 2016-04-01 04:23 EDT ---



--- Additional comment from Fabiano Fidêncio on 2016-04-01 04:59:56 EDT ---

And I guess you're not able to provide --debug --spice-debug outputs, right? If you can, please attach to this bug.
Also, can you reproduce the same issue on RHEL7.3 or 6.8?

--- Additional comment from Andrei Stepanov on 2016-04-01 05:08:21 EDT ---

I cannot provide debug output due to: https://bugzilla.redhat.com/show_bug.cgi?id=1322929

Yes, I can reproduce it using virt-viewer-2.0-7.el7.x86_64

--- Additional comment from Fabiano Fidêncio on 2016-04-01 05:14:00 EDT ---

(In reply to Andrei Stepanov from comment #6)
> I cannot provide debug due to :
> https://bugzilla.redhat.com/show_bug.cgi?id=1322929
> 
> Yes, I can reproduce it using virt-viewer-2.0-7.el7.x86_64

So the problem is not Windows specific. A log from virt-viewer 2.0-7 would be enough for dealing with the problem. Also, please, don't forget to clone this bug there.

Comment 1 Andrei Stepanov 2016-04-01 09:23:29 UTC
Created attachment 1142485 [details]
remote-viewer debug

Comment 3 Fabiano Fidêncio 2016-04-01 13:29:23 UTC
I can't reproduce it at all. Please, can you get me the gdb backtrace of the client when it's hanging on "Connected to graphic server"?

Comment 4 Andrei Stepanov 2016-04-01 15:03:52 UTC
Created attachment 1142618 [details]
backtrace at "Connected to graphic server"

Comment 5 Andrei Stepanov 2016-04-01 15:14:28 UTC
Created attachment 1142619 [details]
backtrace at "Connected to graphic server" #2

Comment 6 Pavel Grunt 2016-04-01 15:22:37 UTC
I see that smartcards are involved; it would be nice to mention that...

Comment 7 Andrei Stepanov 2016-04-01 15:28:01 UTC
I do not have a smartcard connected at the client.

Comment 8 Fabiano Fidêncio 2016-04-05 12:12:11 UTC
Andrei,

First, this is what I got from the hypervisor:
 9425 pts/0    S+     0:00 grep --color=auto kvm
 9456 ?        Sl   401:46 /usr/libexec/qemu-kvm -name rhel_68_32 -S -machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off -cpu SandyBridge -m size=4194304k,slots=16,maxmem=20971520k -realtime mlock=off -smp 2,maxcpus=16,sockets=16,cores=1,threads=1 -numa node,nodeid=0,cpus=0-1,mem=4096 -uuid 7c649f8f-b565-4113-aa10-7c673a3de539 -smbios type=1,manufacturer=Red Hat,product=RHEV Hypervisor,version=7.2-10.0.el7ev,serial=34353736-3132-5A43-3135-333430314B33,uuid=7c649f8f-b565-4113-aa10-7c673a3de539 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-rhel_68_32/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2016-04-01T12:22:29,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x6.0x7 -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x6.0x2 -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x6 -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x6.0x1 -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -device usb-ccid,id=ccid0 -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=/rhev/data-center/00000001-0001-0001-0001-0000000002ae/a57f4682-87be-4902-92ee-0223ae3537be/images/5144be3f-f218-4e13-a3b4-4e22a8c774c6/7bcd44da-9236-4e9d-8571-fc0f8a2b2551,if=none,id=drive-virtio-disk0,format=raw,serial=5144be3f-f218-4e13-a3b4-4e22a8c774c6,cache=none,werror=stop,rerror=stop,aio=native -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=34,id=hostnet0,vhost=on,vhostfd=35 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:65,bus=pci.0,addr=0x3 -chardev spicevmc,id=charsmartcard0,name=smartcard -device ccid-card-passthru,chardev=charsmartcard0,id=smartcard0,bus=ccid0.0 
-chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/7c649f8f-b565-4113-aa10-7c673a3de539.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/7c649f8f-b565-4113-aa10-7c673a3de539.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice port=5906,tls-port=5907,addr=10.34.73.138,x509-dir=/etc/pki/vdsm/libvirt-spice,seamless-migration=on -device qxl-vga,id=video0,ram_size=268435456,vram_size=33554432,vgamem_mb=16,bus=pci.0,addr=0x2 -device intel-hda,id=sound0,bus=pci.0,addr=0x4 -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 -chardev spicevmc,id=charredir0,name=usbredir -device usb-redir,chardev=charredir0,id=redir0 -chardev spicevmc,id=charredir1,name=usbredir -device usb-redir,chardev=charredir1,id=redir1 -chardev spicevmc,id=charredir2,name=usbredir -device usb-redir,chardev=charredir2,id=redir2 -chardev spicevmc,id=charredir3,name=usbredir -device usb-

The important part of this is the amount of ram and vgamem set for the RHEL-6 guest, which I'm copying here:
-device qxl-vga,id=video0,ram_size=268435456,vram_size=33554432,vgamem_mb=16,bus=pci.0,addr=0x2

According to https://bugzilla.redhat.com/show_bug.cgi?id=1275539#c12 ... vgamem is wrong. It must be 16 MB * number_of_heads for RHEL-6 guests.

So, considering your hypervisor is up to date, that's a RHEVM bug and not a spice one.

I'm going to mention this comment on all other related bug reports.

Comment 9 Andrei Stepanov 2016-04-05 12:28:25 UTC
vdsm-4.17.23.2-1.el7ev.noarch
* Sun Mar 27 2016 Eyal Edri <eedri> - 4.17.23.3

rhevm-userportal-3.6.4.1-0.1.el6.noarch
* Tue Mar 22 2016 Gil Shinar <gshinar> - 3.6.4


As you can see they are pretty fresh packages.

Comment 11 Andrei Stepanov 2016-04-11 17:01:35 UTC
vdsm-4.17.23.2-1.el7ev.noarch
* Sun Mar 27 2016 Eyal Edri <eedri> - 4.17.23.3
rhevm-userportal-3.6.4.1-0.1.el6.noarch
* Tue Mar 22 2016 Gil Shinar <gshinar> - 3.6.4 

For the above packages I always get ram_size=268435456,vram_size=8388608,vgamem_mb=64

According to https://bugzilla.redhat.com/show_bug.cgi?id=1275539#c12 

  Should be:  vgamem = 16 MB * number_of_heads
  Is: vgamem_mb=64
  
  Should be: ram = 4 * vgamem
  Is: ram_size=268435456
  64 * 1024 * 1024 * 4 = 268435456 bytes

  Should be: vram = 8 MB for other guest systems (I have RHEL6.8)
  Is: vram_size=8388608
  8 * 1024 * 1024 = 8388608 bytes

So, all the values are correct.
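For reference, the sizing rules quoted above can be checked with a short script. The rules (vgamem = 16 MB per head, ram = 4 * vgamem, vram = 8 MB for non-Windows guests) are taken from bz#1275539#c12 as quoted in this comment; the helper name is illustrative, not part of any tool.

```python
# Sanity-check qxl-vga memory sizes against the rules quoted from
# bz#1275539#c12. Observed hypervisor values for a 4-display VM:
#   ram_size=268435456,vram_size=8388608,vgamem_mb=64
MiB = 1024 * 1024

def expected_qxl_sizes(heads):
    """Return (vgamem, ram, vram) in bytes per the quoted rules."""
    vgamem = 16 * MiB * heads  # vgamem = 16 MB * number_of_heads
    ram = 4 * vgamem           # ram = 4 * vgamem
    vram = 8 * MiB             # vram = 8 MB for non-Windows guests
    return vgamem, ram, vram

# VM has 4 displays in the RHEVM settings:
vgamem, ram, vram = expected_qxl_sizes(heads=4)
# vgamem -> 67108864 (64 MiB), ram -> 268435456, vram -> 8388608,
# matching the observed qemu-kvm command line.
```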

Comment 12 Fabiano Fidêncio 2016-04-11 17:39:56 UTC
So, where was the problem? Why did I get a different value when getting the info from the hypervisor? Did you update something?

Comment 13 Andrei Stepanov 2016-04-11 17:48:11 UTC
We didn't update hosts/engine.
I can reproduce the bug using current environment.

Comment 14 Milan Zamazal 2016-04-12 09:18:46 UTC
If you get ram_size=268435456,vram_size=8388608,vgamem_mb=64 with RHEV 3.6, it's correct and should be working. Values ram_size=268435456,vram_size=33554432,vgamem_mb=16 used to be set by older Engine versions -- it was a RHEV bug that has been fixed in 3.6. Please note you need up-to-date versions of both Engine and Vdsm for the fix and you must restart the VM in order to apply the new video RAM settings after you upgraded from RHEV < 3.6 (either Engine or hypervisor).

Comment 15 Fabiano Fidêncio 2016-04-27 12:54:07 UTC
(In reply to Andrei Stepanov from comment #13)
> We didn't update hosts/engine.
> I can reproduce the bug using current environment.

I can also reproduce the bug, and this time, for sure, it's not related to the amount of memory set for vgamem.

Comment 16 Christophe Fergeau 2016-05-17 14:34:16 UTC
Is this reproducible with the latest spice-gtk/virt-viewer version in el7.3 (spice-gtk-0.31-2.el7, virt-viewer-2.0-7.el7)? There was a significant rebase of spice-gtk.

Comment 17 Fabiano Fidêncio 2016-05-17 16:21:35 UTC
(In reply to Christophe Fergeau from comment #16)
> Is this reproducible with the latest spice-gtk/virt-viewer version in el7.3?
> (spice-gtk-0.31-2.el7 - virt-viewer-2.0-7.el7). There was a significant
> rebase of spice-gtk

It's reproducible even with upstream virt-viewer on f24 (which means, spice-gtk 0.31).

I've managed to reproduce it easily on astepano's installed VMs. Please, ping him to get access to his VMs and also to the hypervisor.

Another important piece of information is that it is reproducible with different guests (RHEL-6, RHEL-7) and with different clients (rhevm-3.6, rhel-7.2, and upstream).

Comment 18 Andrei Stepanov 2016-05-17 17:18:57 UTC
I cannot reproduce it on client RHEL 7 nightly (17052016) with:

virt-viewer-2.0-7.el7.x86_64
spice-gtk3-0.31-2.el7.x86_64

But I am not sure what the cause is.

When I downgraded back to 
spice-gtk3      x86_64      0.26-8.el7
spice-glib      x86_64      0.26-8.el7
I managed to reproduce the bug only once (tried ~30 times).


Server is:

rpm -q --changelog spice-server-0.12.4-15.el7.x86_64 | head
* Wed Sep 23 2015 Frediano Ziglio <fziglio> 0.12.4-15
- CVE-2015-5260 CVE-2015-5261 fixed various security flaws
  Resolves: rhbz#1267134

I would suggest setting the priority to low.

Comment 19 Christophe Fergeau 2016-05-25 16:25:59 UTC
Should be fixed by https://lists.freedesktop.org/archives/spice-devel/2016-May/029534.html

Comment 20 Pavel Grunt 2016-06-08 08:57:02 UTC
*** Bug 1275673 has been marked as a duplicate of this bug. ***

Comment 22 Radek Duda 2016-07-29 15:37:09 UTC
The bug is still there.
Client - rhel 7.3
spice-gtk-0.31-4.el7.x86_64
spice-glib-0.31-4.el7.x86_64
virt-viewer-2.0-11.el7.x86_64

Connecting to the guest using rhev-m 4.0.2.1-0.1.el7ev
Guest:
 /usr/libexec/qemu-kvm -name rhel7.2-64_rd -S -machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off -cpu SandyBridge -m size=2048000k,slots=16,maxmem=4294967296k -realtime mlock=off -smp 2,maxcpus=16,sockets=16,cores=1,threads=1 -numa node,nodeid=0,cpus=0-1,mem=2000 -uuid cf1683ed-2bb9-4369-ae0d-94985def68f3 -smbios type=1,manufacturer=Red Hat,product=RHEV Hypervisor,version=7.2-9.el7,serial=4C4C4544-0053-4710-8059-C8C04F37354A,uuid=cf1683ed-2bb9-4369-ae0d-94985def68f3 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-rhel7.2-64_rd/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2016-07-26T14:41:40,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x9.0x7 -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x9 -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x9.0x2 -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x9.0x1 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -device usb-ccid,id=ccid0 -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=/rhev/data-center/00000001-0001-0001-0001-0000000000f5/ae0bc21f-bbac-4641-b518-b77053b8dc70/images/c339e376-d22b-4cbe-b2c0-ea36e50da330/b2d3971f-0858-479a-a1cb-1478c7fc8d11,if=none,id=drive-virtio-disk0,format=qcow2,serial=c339e376-d22b-4cbe-b2c0-ea36e50da330,cache=none,werror=stop,rerror=stop,aio=native -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=40,id=hostnet0,vhost=on,vhostfd=41 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:54,bus=pci.0,addr=0x3 -chardev spicevmc,id=charsmartcard0,name=smartcard -device 
ccid-card-passthru,chardev=charsmartcard0,id=smartcard0,bus=ccid0.0 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/cf1683ed-2bb9-4369-ae0d-94985def68f3.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/cf1683ed-2bb9-4369-ae0d-94985def68f3.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice tls-port=5906,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on -device qxl-vga,id=video0,ram_size=268435456,vram_size=134217728,vgamem_mb=64,bus=pci.0,addr=0x2 -chardev spicevmc,id=charredir0,name=usbredir -device usb-redir,chardev=charredir0,id=redir0 -chardev spicevmc,id=charredir1,name=usbredir -device usb-redir,chardev=charredir1,id=redir1 -chardev spicevmc,id=charredir2,name=usbredir -device usb-redir,chardev=charredir2,id=redir2 -chardev spicevmc,id=charredir3,name=usbredir -device usb-redir,chardev=charredir3,id=redir3 -incoming tcp:[::]:49152 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 -msg timestamp=on

The vgamem amount should be sufficient.

Also reproducible on a Windows 7 guest.
I can provide any debug data if needed.

Comment 23 Radek Duda 2016-08-01 10:21:07 UTC
I forgot to check the monitor mapping for these VMs, which was left in my settings file. Without any monitor mapping config, the fix works as it should.
I am moving the bug back to the ON_QA state.

Comment 26 errata-xmlrpc 2016-11-04 01:15:55 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-2229.html

