Created attachment 824034 [details]
libvirt, engine, and vdsm logs

I'm testing a new installation of oVirt 3.3 and did the following setup:

- Fedora 19 with the minimal profile
- oVirt 3.3 via the yum repository
- engine-setup with firewalld disabled (http://www.ovirt.org/Download)
- VDSM installation (http://www.ovirt.org/Installing_VDSM_from_rpm); I ignored the settings described for vdsm.conf because I had problems with the VMs in a previous setup. I'm using the engine host as a node of my cluster, but I did not use the AIO engine plugin.
- glusterd (http://www.gluster.org/2013/09/ovirt-3-3-glusterized/)

After everything went OK I created a VM, but at startup it reaches 100% CPU usage and never comes out of it, and clicking on the SPICE console returns the following error: "Unable to connect to the graphic server". With a VNC console the problem is similar, except that no error message appears and the screen just stays dark.

Logs:

[root@gorpo01 vdsm]# systemctl status vdsmd
vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled)
   Active: active (running) since Thu 2013-11-14 14:03:52 EST; 20min ago
  Process: 21476 ExecStop=/lib/systemd/systemd-vdsmd stop (code=exited, status=0/SUCCESS)
  Process: 21660 ExecStart=/lib/systemd/systemd-vdsmd start (code=exited, status=0/SUCCESS)
 Main PID: 21814 (respawn)
   CGroup: name=systemd:/system/vdsmd.service
           ├─21814 /bin/bash -e /usr/share/vdsm/respawn --minlifetime 10 --daemon --masterpid /var/run/vdsm/respawn.pid /usr/share/vdsm/vdsm
           ├─21930 /usr/sbin/glusterfs --volfile-id=data --volfile-server=gorpo01.ovirtcluster /rhev/data-center/mnt/glusterSD/gorpo01.ovirtcluster:data
           ├─22297 /usr/bin/python /usr/share/vdsm/vdsm
           ├─22364 /usr/bin/python /usr/share/vdsm/storage/remoteFileHandler.pyc 35 34
           ├─22366 /usr/bin/python /usr/share/vdsm/storage/remoteFileHandler.pyc 37 35
           ├─22423 /usr/bin/python /usr/share/vdsm/storage/remoteFileHandler.pyc 45 42
           ├─22428 /usr/bin/python /usr/share/vdsm/storage/remoteFileHandler.pyc 53 51
           ├─22437 /usr/bin/python /usr/share/vdsm/storage/remoteFileHandler.pyc 58 53
           ├─22440 /usr/bin/python /usr/share/vdsm/storage/remoteFileHandler.pyc 58 55
           ├─22442 /usr/bin/python /usr/share/vdsm/storage/remoteFileHandler.pyc 70 68
           ├─22443 /usr/bin/python /usr/share/vdsm/storage/remoteFileHandler.pyc 55 45
           ├─22447 /usr/bin/python /usr/share/vdsm/storage/remoteFileHandler.pyc 68 55
           └─30111 /usr/bin/python /usr/share/vdsm/storage/remoteFileHandler.pyc 78 68

Nov 14 14:04:44 gorpo01.ovirtcluster vdsm[22297]: WARNING vdsm vm.Vm vmId=`3f538f8f-2f41-46a7-91c5-11bbfe89e134`::Unknown type found, device: '{'device': u'spicevmc', 'alias': ...3'}}' found
Nov 14 14:04:45 gorpo01.ovirtcluster vdsm[22297]: WARNING vdsm vm.Vm vmId=`3f538f8f-2f41-46a7-91c5-11bbfe89e134`::_readPauseCode unsupported by libvirt vm
Nov 14 14:04:48 gorpo01.ovirtcluster vdsm[22297]: vdsm TaskManager.Task ERROR Task=`8becbf76-7688-472e-91f3-f01147b6d130`::Unexpected error
Nov 14 14:04:48 gorpo01.ovirtcluster vdsm[22297]: vdsm TaskManager.Task ERROR Task=`71139a11-1a81-406f-bf36-471b1ead02fb`::Unexpected error
Nov 14 14:04:55 gorpo01.ovirtcluster vdsm[22297]: WARNING vdsm vm.Vm vmId=`3f538f8f-2f41-46a7-91c5-11bbfe89e134`::Unknown type found, device: '{'device': u'unix', 'alias': u'ch...1'}}' found
Nov 14 14:04:55 gorpo01.ovirtcluster vdsm[22297]: WARNING vdsm vm.Vm vmId=`3f538f8f-2f41-46a7-91c5-11bbfe89e134`::Unknown type found, device: '{'device': u'unix', 'alias': u'ch...2'}}' found
Nov 14 14:04:55 gorpo01.ovirtcluster vdsm[22297]: WARNING vdsm vm.Vm vmId=`3f538f8f-2f41-46a7-91c5-11bbfe89e134`::Unknown type found, device: '{'device': u'spicevmc', 'alias': ...3'}}' found
Nov 14 14:07:03 gorpo01.ovirtcluster vdsm[22297]: WARNING vdsm vds Drive is not a vdsm image: VOLWM_CHUNK_MB:1024 VOLWM_CHUNK_REPLICATE_MULT:2 VOLWM_FREE_PCT:50 _blockDev
Nov 14 14:07:03 gorpo01.ovirtcluster vdsm[22297]: WARNING vdsm vds Drive is not a vdsm image: VOLWM_CHUNK_MB:1024 VOLWM_CHUNK_REPLICATE_MULT:2 VOLWM_FREE_PCT:50 _blockDev
Nov 14 14:07:24 gorpo01.ovirtcluster vdsm[22297]: WARNING vdsm vm.Vm vmId=`3f538f8f-2f41-46a7-91c5-11bbfe89e134`::_readPauseCode unsupported by libvirt vm

[root@gorpo01 vdsm]# rpm -qa | grep libvirt
libvirt-daemon-driver-network-1.0.5.6-3.fc19.x86_64
libvirt-daemon-driver-xen-1.0.5.6-3.fc19.x86_64
libvirt-daemon-driver-nodedev-1.0.5.6-3.fc19.x86_64
libvirt-daemon-driver-lxc-1.0.5.6-3.fc19.x86_64
libvirt-python-1.0.5.6-3.fc19.x86_64
libvirt-daemon-driver-nwfilter-1.0.5.6-3.fc19.x86_64
libvirt-daemon-driver-secret-1.0.5.6-3.fc19.x86_64
libvirt-daemon-kvm-1.0.5.6-3.fc19.x86_64
libvirt-daemon-driver-uml-1.0.5.6-3.fc19.x86_64
libvirt-daemon-qemu-1.0.5.6-3.fc19.x86_64
libvirt-client-1.0.5.6-3.fc19.x86_64
libvirt-daemon-driver-qemu-1.0.5.6-3.fc19.x86_64
libvirt-daemon-driver-interface-1.0.5.6-3.fc19.x86_64
libvirt-daemon-driver-libxl-1.0.5.6-3.fc19.x86_64
libvirt-1.0.5.6-3.fc19.x86_64
libvirt-daemon-driver-storage-1.0.5.6-3.fc19.x86_64
libvirt-daemon-config-network-1.0.5.6-3.fc19.x86_64
libvirt-daemon-1.0.5.6-3.fc19.x86_64
libvirt-daemon-config-nwfilter-1.0.5.6-3.fc19.x86_64
libvirt-lock-sanlock-1.0.5.6-3.fc19.x86_64

[root@gorpo01 vdsm]# rpm -qa | grep ovirt
ovirt-engine-userportal-3.3.0.1-1.fc19.noarch
ovirt-log-collector-3.3.1-1.fc19.noarch
ovirt-engine-setup-3.3.0.1-1.fc19.noarch
ovirt-image-uploader-3.3.1-1.fc19.noarch
ovirt-release-fedora-8-1.noarch
ovirt-host-deploy-java-1.1.1-1.fc19.noarch
ovirt-engine-backend-3.3.0.1-1.fc19.noarch
ovirt-engine-webadmin-portal-3.3.0.1-1.fc19.noarch
ovirt-engine-sdk-python-3.3.0.7-1.fc19.noarch
ovirt-engine-lib-3.3.0.1-1.fc19.noarch
ovirt-engine-tools-3.3.0.1-1.fc19.noarch
ovirt-engine-3.3.0.1-1.fc19.noarch
ovirt-engine-restapi-3.3.0.1-1.fc19.noarch
ovirt-iso-uploader-3.3.1-1.fc19.noarch
ovirt-engine-cli-3.3.0.5-1.fc19.noarch
ovirt-engine-dbscripts-3.3.0.1-1.fc19.noarch
ovirt-host-deploy-1.1.1-1.fc19.noarch

[root@gorpo01 vdsm]# rpm -qa | grep qemu
ipxe-roms-qemu-20130517-2.gitc4bce43.fc19.noarch
qemu-system-x86-1.4.2-12.fc19.x86_64
qemu-system-microblaze-1.4.2-12.fc19.x86_64
qemu-system-mips-1.4.2-12.fc19.x86_64
qemu-img-1.4.2-12.fc19.x86_64
qemu-system-unicore32-1.4.2-12.fc19.x86_64
qemu-system-xtensa-1.4.2-12.fc19.x86_64
qemu-common-1.4.2-12.fc19.x86_64
qemu-system-arm-1.4.2-12.fc19.x86_64
qemu-system-sparc-1.4.2-12.fc19.x86_64
qemu-system-m68k-1.4.2-12.fc19.x86_64
libvirt-daemon-qemu-1.0.5.6-3.fc19.x86_64
libvirt-daemon-driver-qemu-1.0.5.6-3.fc19.x86_64
qemu-kvm-1.4.2-12.fc19.x86_64
qemu-system-or32-1.4.2-12.fc19.x86_64
qemu-system-s390x-1.4.2-12.fc19.x86_64
qemu-system-alpha-1.4.2-12.fc19.x86_64
qemu-1.4.2-12.fc19.x86_64
qemu-user-1.4.2-12.fc19.x86_64
qemu-system-cris-1.4.2-12.fc19.x86_64
qemu-system-ppc-1.4.2-12.fc19.x86_64
qemu-system-sh4-1.4.2-12.fc19.x86_64
qemu-system-lm32-1.4.2-12.fc19.x86_64
qemu-kvm-tools-1.4.2-12.fc19.x86_64

[root@gorpo01 vdsm]# rpm -qa | grep gluster
glusterfs-libs-3.4.1-1.fc19.x86_64
glusterfs-server-3.4.1-1.fc19.x86_64
vdsm-gluster-4.12.1-4.fc19.noarch
glusterfs-3.4.1-1.fc19.x86_64
glusterfs-cli-3.4.1-1.fc19.x86_64
glusterfs-api-3.4.1-1.fc19.x86_64
glusterfs-fuse-3.4.1-1.fc19.x86_64
glusterfs-rdma-3.4.1-1.fc19.x86_64
If I configure vdsm.conf with those parameters, the VM fails to start a SPICE session. Here is the engine message:

VM teste is down. Exit message: unsupported configuration: spice secure channels set in XML configuration, but TLS port is not provided.
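For context, and as an illustration rather than a diagnosis from these logs: libvirt raises that exit message when the domain XML marks SPICE channels as secure but no tlsPort is configured for the graphics device, which typically means the host was never set up for SPICE TLS. The relevant host-side knobs (real libvirt/VDSM option names; the certificate path is the conventional default, used here as a placeholder) look like this:

```ini
# /etc/libvirt/qemu.conf (illustrative values)
spice_tls = 1
spice_tls_x509_cert_dir = "/etc/pki/libvirt-spice"

# /etc/vdsm/vdsm.conf
[vars]
ssl = true
```

If TLS is intentionally disabled (plain TCP, as in this setup), the inverse must hold consistently: ssl disabled in vdsm.conf and no secure channels requested in the VM's graphics configuration, otherwise libvirt refuses to start the domain.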
Setting the target release to the current version for consideration and review. Please do not push non-RFE bugs to an undefined target release; this ensures bugs are reviewed for relevancy, fix, closure, etc.
(In reply to Felipe Diefenbach from comment #1) Are you running your engine over plain TCP? How are you opening your console (browser plugin / .vv file download with a remote-viewer file-type association / browser client noVNC & spice-html5)?
I'm running over a TCP connection, with Firefox 25 and spice-xpi installed via yum. I noticed that when my DNS does not resolve names correctly, SPICE does not work. As a workaround I configured the names in /etc/hosts, and it connects again without any problems.
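The workaround above can be sketched as a quick check (the hostname is taken from this report's logs; the IP address is a placeholder, not from the report):

```shell
# Check whether the host's FQDN resolves through the configured resolvers;
# getent consults /etc/hosts as well as DNS, per /etc/nsswitch.conf.
host=gorpo01.ovirtcluster
if getent hosts "$host" >/dev/null; then
    echo "$host resolves"
else
    # Workaround from the comment above: pin the name locally, e.g.
    # append a line like "10.0.0.10  gorpo01.ovirtcluster" to /etc/hosts.
    echo "$host does not resolve; add an /etc/hosts entry"
fi
```

Pinning names in /etc/hosts is a stopgap; the follow-up comment below is right that the console flow relies on the resolver working properly on both the client and the hosts.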
Thanks for the confirmation; we do rely on the resolver working properly.