Bug 1030544 - Spice console unable to connect to the graphic server
Summary: Spice console unable to connect to the graphic server
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: oVirt
Classification: Retired
Component: ovirt-engine-webadmin
Version: 3.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 3.4.0
Assignee: Nobody's working on this, feel free to take it
QA Contact: bugs@ovirt.org
URL:
Whiteboard: virt
Depends On:
Blocks:
 
Reported: 2013-11-14 16:32 UTC by Felipe Diefenbach
Modified: 2014-02-14 14:35 UTC (History)
7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-02-14 14:35:45 UTC
oVirt Team: ---


Attachments
logs from libvirt, engine, vdsm (7.03 MB, application/gzip)
2013-11-14 16:32 UTC, Felipe Diefenbach

Description Felipe Diefenbach 2013-11-14 16:32:17 UTC
Created attachment 824034 [details]
logs from libvirt, engine, vdsm

I'm testing a new oVirt 3.3 installation and did the following setup:

- Fedora 19 with the minimal profile
- oVirt 3.3 via the yum repository
- engine-setup with firewalld disabled ( http://www.ovirt.org/Download )
- VDSM installation ( http://www.ovirt.org/Installing_VDSM_from_rpm ); I ignored the vdsm.conf settings described there because I had problems with the VMs in a previous setup. I'm using the engine machine as a host node of my cluster, but I did not use the AIO engine plugin
- Glusterd ( http://www.gluster.org/2013/09/ovirt-3-3-glusterized/ )

After everything went OK I created a VM, but from the start it reaches 100% CPU usage and never comes down, and clicking the SPICE console returns me the following error:

"Unable to connect to the graphic server"

With the VNC console the behavior is similar, except the error does not occur and the console screen just stays dark.

logs:

[root@gorpo01 vdsm]# systemctl status vdsmd
vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled)
   Active: active (running) since Thu 2013-11-14 14:03:52 EST; 20min ago
  Process: 21476 ExecStop=/lib/systemd/systemd-vdsmd stop (code=exited, status=0/SUCCESS)
  Process: 21660 ExecStart=/lib/systemd/systemd-vdsmd start (code=exited, status=0/SUCCESS)
 Main PID: 21814 (respawn)
   CGroup: name=systemd:/system/vdsmd.service
           ├─21814 /bin/bash -e /usr/share/vdsm/respawn --minlifetime 10 --daemon --masterpid /var/run/vdsm/respawn.pid /usr/share/vdsm/vdsm
           ├─21930 /usr/sbin/glusterfs --volfile-id=data --volfile-server=gorpo01.ovirtcluster /rhev/data-center/mnt/glusterSD/gorpo01.ovirtcluster:data
           ├─22297 /usr/bin/python /usr/share/vdsm/vdsm
           ├─22364 /usr/bin/python /usr/share/vdsm/storage/remoteFileHandler.pyc 35 34
           ├─22366 /usr/bin/python /usr/share/vdsm/storage/remoteFileHandler.pyc 37 35
           ├─22423 /usr/bin/python /usr/share/vdsm/storage/remoteFileHandler.pyc 45 42
           ├─22428 /usr/bin/python /usr/share/vdsm/storage/remoteFileHandler.pyc 53 51
           ├─22437 /usr/bin/python /usr/share/vdsm/storage/remoteFileHandler.pyc 58 53
           ├─22440 /usr/bin/python /usr/share/vdsm/storage/remoteFileHandler.pyc 58 55
           ├─22442 /usr/bin/python /usr/share/vdsm/storage/remoteFileHandler.pyc 70 68
           ├─22443 /usr/bin/python /usr/share/vdsm/storage/remoteFileHandler.pyc 55 45
           ├─22447 /usr/bin/python /usr/share/vdsm/storage/remoteFileHandler.pyc 68 55
           └─30111 /usr/bin/python /usr/share/vdsm/storage/remoteFileHandler.pyc 78 68

Nov 14 14:04:44 gorpo01.ovirtcluster vdsm[22297]: WARNING vdsm vm.Vm vmId=`3f538f8f-2f41-46a7-91c5-11bbfe89e134`::Unknown type found, device: '{'device': u'spicevmc', 'alias': ...3'}}' found
Nov 14 14:04:45 gorpo01.ovirtcluster vdsm[22297]: WARNING vdsm vm.Vm vmId=`3f538f8f-2f41-46a7-91c5-11bbfe89e134`::_readPauseCode unsupported by libvirt vm
Nov 14 14:04:48 gorpo01.ovirtcluster vdsm[22297]: vdsm TaskManager.Task ERROR Task=`8becbf76-7688-472e-91f3-f01147b6d130`::Unexpected error
Nov 14 14:04:48 gorpo01.ovirtcluster vdsm[22297]: vdsm TaskManager.Task ERROR Task=`71139a11-1a81-406f-bf36-471b1ead02fb`::Unexpected error
Nov 14 14:04:55 gorpo01.ovirtcluster vdsm[22297]: WARNING vdsm vm.Vm vmId=`3f538f8f-2f41-46a7-91c5-11bbfe89e134`::Unknown type found, device: '{'device': u'unix', 'alias': u'ch...1'}}' found
Nov 14 14:04:55 gorpo01.ovirtcluster vdsm[22297]: WARNING vdsm vm.Vm vmId=`3f538f8f-2f41-46a7-91c5-11bbfe89e134`::Unknown type found, device: '{'device': u'unix', 'alias': u'ch...2'}}' found
Nov 14 14:04:55 gorpo01.ovirtcluster vdsm[22297]: WARNING vdsm vm.Vm vmId=`3f538f8f-2f41-46a7-91c5-11bbfe89e134`::Unknown type found, device: '{'device': u'spicevmc', 'alias': ...3'}}' found
Nov 14 14:07:03 gorpo01.ovirtcluster vdsm[22297]: WARNING vdsm vds Drive is not a vdsm image: VOLWM_CHUNK_MB:1024 VOLWM_CHUNK_REPLICATE_MULT:2 VOLWM_FREE_PCT:50 _blockDev
Nov 14 14:07:03 gorpo01.ovirtcluster vdsm[22297]: WARNING vdsm vds Drive is not a vdsm image: VOLWM_CHUNK_MB:1024 VOLWM_CHUNK_REPLICATE_MULT:2 VOLWM_FREE_PCT:50 _blockDev
Nov 14 14:07:24 gorpo01.ovirtcluster vdsm[22297]: WARNING vdsm vm.Vm vmId=`3f538f8f-2f41-46a7-91c5-11bbfe89e134`::_readPauseCode unsupported by libvirt vm

[root@gorpo01 vdsm]# rpm -qa | grep libvirt
libvirt-daemon-driver-network-1.0.5.6-3.fc19.x86_64
libvirt-daemon-driver-xen-1.0.5.6-3.fc19.x86_64
libvirt-daemon-driver-nodedev-1.0.5.6-3.fc19.x86_64
libvirt-daemon-driver-lxc-1.0.5.6-3.fc19.x86_64
libvirt-python-1.0.5.6-3.fc19.x86_64
libvirt-daemon-driver-nwfilter-1.0.5.6-3.fc19.x86_64
libvirt-daemon-driver-secret-1.0.5.6-3.fc19.x86_64
libvirt-daemon-kvm-1.0.5.6-3.fc19.x86_64
libvirt-daemon-driver-uml-1.0.5.6-3.fc19.x86_64
libvirt-daemon-qemu-1.0.5.6-3.fc19.x86_64
libvirt-client-1.0.5.6-3.fc19.x86_64
libvirt-daemon-driver-qemu-1.0.5.6-3.fc19.x86_64
libvirt-daemon-driver-interface-1.0.5.6-3.fc19.x86_64
libvirt-daemon-driver-libxl-1.0.5.6-3.fc19.x86_64
libvirt-1.0.5.6-3.fc19.x86_64
libvirt-daemon-driver-storage-1.0.5.6-3.fc19.x86_64
libvirt-daemon-config-network-1.0.5.6-3.fc19.x86_64
libvirt-daemon-1.0.5.6-3.fc19.x86_64
libvirt-daemon-config-nwfilter-1.0.5.6-3.fc19.x86_64
libvirt-lock-sanlock-1.0.5.6-3.fc19.x86_64
[root@gorpo01 vdsm]# rpm -qa | grep ovirt
ovirt-engine-userportal-3.3.0.1-1.fc19.noarch
ovirt-log-collector-3.3.1-1.fc19.noarch
ovirt-engine-setup-3.3.0.1-1.fc19.noarch
ovirt-image-uploader-3.3.1-1.fc19.noarch
ovirt-release-fedora-8-1.noarch
ovirt-host-deploy-java-1.1.1-1.fc19.noarch
ovirt-engine-backend-3.3.0.1-1.fc19.noarch
ovirt-engine-webadmin-portal-3.3.0.1-1.fc19.noarch
ovirt-engine-sdk-python-3.3.0.7-1.fc19.noarch
ovirt-engine-lib-3.3.0.1-1.fc19.noarch
ovirt-engine-tools-3.3.0.1-1.fc19.noarch
ovirt-engine-3.3.0.1-1.fc19.noarch
ovirt-engine-restapi-3.3.0.1-1.fc19.noarch
ovirt-iso-uploader-3.3.1-1.fc19.noarch
ovirt-engine-cli-3.3.0.5-1.fc19.noarch
ovirt-engine-dbscripts-3.3.0.1-1.fc19.noarch
ovirt-host-deploy-1.1.1-1.fc19.noarch
[root@gorpo01 vdsm]# rpm -qa | grep qemu
ipxe-roms-qemu-20130517-2.gitc4bce43.fc19.noarch
qemu-system-x86-1.4.2-12.fc19.x86_64
qemu-system-microblaze-1.4.2-12.fc19.x86_64
qemu-system-mips-1.4.2-12.fc19.x86_64
qemu-img-1.4.2-12.fc19.x86_64
qemu-system-unicore32-1.4.2-12.fc19.x86_64
qemu-system-xtensa-1.4.2-12.fc19.x86_64
qemu-common-1.4.2-12.fc19.x86_64
qemu-system-arm-1.4.2-12.fc19.x86_64
qemu-system-sparc-1.4.2-12.fc19.x86_64
qemu-system-m68k-1.4.2-12.fc19.x86_64
libvirt-daemon-qemu-1.0.5.6-3.fc19.x86_64
libvirt-daemon-driver-qemu-1.0.5.6-3.fc19.x86_64
qemu-kvm-1.4.2-12.fc19.x86_64
qemu-system-or32-1.4.2-12.fc19.x86_64
qemu-system-s390x-1.4.2-12.fc19.x86_64
qemu-system-alpha-1.4.2-12.fc19.x86_64
qemu-1.4.2-12.fc19.x86_64
qemu-user-1.4.2-12.fc19.x86_64
qemu-system-cris-1.4.2-12.fc19.x86_64
qemu-system-ppc-1.4.2-12.fc19.x86_64
qemu-system-sh4-1.4.2-12.fc19.x86_64
qemu-system-lm32-1.4.2-12.fc19.x86_64
qemu-kvm-tools-1.4.2-12.fc19.x86_64
[root@gorpo01 vdsm]# rpm -qa | grep gluster
glusterfs-libs-3.4.1-1.fc19.x86_64
glusterfs-server-3.4.1-1.fc19.x86_64
vdsm-gluster-4.12.1-4.fc19.noarch
glusterfs-3.4.1-1.fc19.x86_64
glusterfs-cli-3.4.1-1.fc19.x86_64
glusterfs-api-3.4.1-1.fc19.x86_64
glusterfs-fuse-3.4.1-1.fc19.x86_64
glusterfs-rdma-3.4.1-1.fc19.x86_64

Comment 1 Felipe Diefenbach 2013-11-14 18:27:01 UTC
If I configure vdsm.conf with those parameters, the VM fails to start a SPICE session.

VM teste is down. Exit message: unsupported configuration: spice secure channels set in XML configuration, but TLS port is not provided.

That is the engine message.
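For context, that libvirt error means the domain XML requests SPICE secure channels while no TLS port was allocated for the graphics device. Assuming the vdsm.conf parameters in question were the SSL-related ones from the Installing_VDSM_from_rpm page, a plausible way to hit this is disabling SSL on the VDSM side while the engine still requests secure SPICE channels:

```ini
# /etc/vdsm/vdsm.conf -- hypothetical fragment; the value shown is an assumption.
# With ssl = false, no TLS port is set for the SPICE graphics device, so a
# domain XML that still lists secure channels fails exactly as above.
[vars]
ssl = false
```

The encryption expectations on the engine side and the VDSM side have to agree; changing only one of them produces this mismatch.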

Comment 2 Itamar Heim 2014-01-12 08:43:26 UTC
setting target release to current version for consideration and review. please do not push non-RFE bugs to an undefined target release to make sure bugs are reviewed for relevancy, fix, closure, etc.

Comment 3 Michal Skrivanek 2014-01-27 08:53:50 UTC
(In reply to Felipe Diefenbach from comment #1)
are you running your engine over plain TCP?
How are you opening your console? (browser plugin / .vv file download and remote-viewer file type association / browser client noVNC&spice_html5)

Comment 4 Felipe Diefenbach 2014-01-27 15:08:13 UTC
I'm running over a plain TCP connection, using Firefox 25 with spice-xpi installed via yum. I noticed that when my DNS does not resolve names correctly, SPICE does not work properly; as a workaround I configured the names in /etc/hosts, and it connects again without any problems.
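The workaround described above amounts to pinning the host's name on the client machine, since the SPICE client connects to the hostname the engine reports rather than an IP address. A hypothetical /etc/hosts entry (the address is a placeholder, not taken from this report):

```
# /etc/hosts on the client -- example entry; replace 192.0.2.10 with the real
# address. The name must match the hostname the engine hands to the client.
192.0.2.10   gorpo01.ovirtcluster
```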

Comment 5 Michal Skrivanek 2014-02-14 14:35:45 UTC
Thanks for the confirmation; we kind of rely on the resolver working properly.

