Bug 994154

Summary: VM rebooted in resume action
Product: Red Hat Enterprise Virtualization Manager
Reporter: vvyazmin <vvyazmin>
Component: vdsm
Assignee: Shahar Havivi <shavivi>
Status: CLOSED CURRENTRELEASE
QA Contact: vvyazmin <vvyazmin>
Severity: urgent
Priority: high
Version: 3.3.0
CC: bazulay, hateya, iheim, lpeer, lsvaty, michal.skrivanek, mperina, vvyazmin, yeylon
Target Milestone: ---
Keywords: Regression, Triaged
Target Release: 3.3.0
Hardware: x86_64
OS: Linux
Whiteboard: virt
Doc Type: Bug Fix
Story Points: ---
Last Closed: 2013-08-11 11:05:39 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Category: ---
oVirt Team: ---
Cloudforms Team: ---
Attachments:
## Logs rhevm, vdsm, libvirt, thread dump, superVdsm (flags: none)

Description vvyazmin@redhat.com 2013-08-06 15:18:19 UTC
Created attachment 783377
## Logs rhevm, vdsm, libvirt, thread dump, superVdsm

Description of problem:
VM rebooted in resume action

Version-Release number of selected component (if applicable):
RHEVM 3.3 - IS8 environment:

RHEVM:  rhevm-3.3.0-0.13.master.el6ev.noarch
VDSM:  vdsm-4.12.0-rc3.13.git06ed3cc.el6ev.x86_64
LIBVIRT:  libvirt-0.10.2-18.el6_4.9.x86_64
QEMU & KVM:  qemu-kvm-rhev-0.12.1.2-2.355.el6_4.5.x86_64
SANLOCK:  sanlock-2.6-2.el6.x86_64
PythonSDK:  rhevm-sdk-python-3.3.0.8-1.el6ev.noarch

How reproducible:
100%

Steps to Reproduce:
Create a VM with one or more disks
Install an OS
Wait until the OS finishes booting
Click “Suspend”
Wait until the VM reaches “Suspended” status
Click “Run” (resume)
Open the console (a scripted equivalent of these steps is sketched below)
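
A minimal scripted equivalent of the steps above, assuming the rhevm-sdk-python 3.x API listed in the environment section; the engine URL, credentials and VM name are placeholders, not values taken from this report:

# Hypothetical reproduction script using the oVirt/RHEV Python SDK 3.x;
# the URL, credentials and VM name below are placeholders.
import time
from ovirtsdk.api import API

api = API(url='https://rhevm.example.com/api',
          username='admin@internal', password='password',
          insecure=True)
try:
    vm = api.vms.get(name='test-vm')
    vm.suspend()                                          # same as clicking "Suspend"
    while api.vms.get(name='test-vm').status.state != 'suspended':
        time.sleep(5)                                     # wait for the "Suspended" status
    api.vms.get(name='test-vm').start()                   # same as clicking "Run" (resume)
finally:
    api.disconnect()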

Actual results:
VM is rebooted

Expected results:
Successfully resumed 

Impact on user:
Applications that were running in the guest OS are closed

Workaround:
none

Additional info:

/var/log/ovirt-engine/engine.log

/var/log/vdsm/vdsm.log

Thread-42930::DEBUG::2013-08-06 15:47:52,413::vm::2034::vm.Vm::(_startUnderlyingVm) vmId=`c0c45eb3-310e-45a0-af6c-72423bbed8a0`::_ongoingCreations released
Thread-42930::ERROR::2013-08-06 15:47:52,413::vm::2060::vm.Vm::(_startUnderlyingVm) vmId=`c0c45eb3-310e-45a0-af6c-72423bbed8a0`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 2020, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/vm.py", line 2866, in _run
    domxml = hooks.before_vm_start(self._buildCmdLine(), self.conf)
  File "/usr/share/vdsm/vm.py", line 2679, in _buildCmdLine
    self._appendDevices(domxml)
  File "/usr/share/vdsm/vm.py", line 2646, in _appendDevices
    deviceXML = dev.getXML().toxml(encoding='utf-8')
  File "/usr/share/vdsm/vm.py", line 1208, in getXML
    vram=self.specParams['vram'], heads='1')
AttributeError: 'VideoDevice' object has no attribute 'specParams'
Thread-42930::DEBUG::2013-08-06 15:47:52,425::vm::2444::vm.Vm::(setDownStatus) vmId=`c0c45eb3-310e-45a0-af6c-72423bbed8a0`::Changed state to Down: 'VideoDevice' object has no attribute 'specParams'
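
For reference, the failure above is a missing-attribute error rather than a missing-key error: the recovered VideoDevice object apparently has no specParams dict at all, so the lookup in getXML fails before the device XML can be built. A standalone sketch of the two distinct failure modes (illustration only, not vdsm code):

# Illustration only (not vdsm code): a missing attribute vs. a missing key.
class VideoDevice(object):
    def __init__(self, specParams=None):
        # hypothetical: the attribute is only set when the data was restored
        if specParams is not None:
            self.specParams = specParams

dev = VideoDevice()                           # restored without its spec params
try:
    dev.specParams['vram']                    # same lookup as vm.py line 1208
except AttributeError as err:
    print(err)                                # 'VideoDevice' object has no attribute 'specParams'

dev = VideoDevice({'vram': '65536'})          # '65536' is a placeholder value
print(dev.specParams.get('heads', '1'))       # a missing key can be defaulted: prints '1'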

Comment 2 Michal Skrivanek 2013-08-07 06:43:26 UTC
possible dupe of bug 984586, Martin, please close if confirmed...

Comment 3 Michal Skrivanek 2013-08-07 06:51:15 UTC
actually looks more related to multidisplay on single QXL, Shahar?

btw wrong logs are attached

Comment 4 Shahar Havivi 2013-08-07 07:26:59 UTC
(In reply to Michal Skrivanek from comment #3)
> actually looks more related to multidisplay on single QXL, Shahar?
Yes it does...

> 
> btw wrong logs are attached

Comment 5 Shahar Havivi 2013-08-07 07:37:25 UTC
Actually, it may not be related to the QXL fix...

this is code before the fix:
vram=self.specParams['vram'], heads='1')

and after the QXL fix:
vram=self.specParams['vram'],
heads=self.specParams.get('heads', '1'))
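
Both snippets above still dereference self.specParams, so the .get() default only covers a missing 'heads' key, not a VideoDevice restored without a specParams attribute at all, which is what the traceback shows. One possible defensive guard at that spot, shown purely as an illustration and not as the actual vdsm patch:

# illustration only, not the actual vdsm patch: tolerate a missing
# specParams attribute as well as a missing 'heads' key
specParams = getattr(self, 'specParams', None) or {}
vram = specParams.get('vram', '65536')        # '65536' is a placeholder default
heads = specParams.get('heads', '1')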

Comment 6 Martin Perina 2013-08-07 08:38:27 UTC
I cannot reproduce it on the latest ovirt-engine-3.3 branch; it works there. It also works even without the 984586 patch applied.

Comment 7 Shahar Havivi 2013-08-07 09:06:42 UTC
I cannot reproduce it either...
It looks like the problem you encountered is related to your vdsm server, which didn't save the VM's data (in this case the specParams).
Can you please try to restart the vdsm service and see if you can reproduce the bug?

Comment 8 Martin Perina 2013-08-07 09:20:47 UTC
I'm unable to reproduce it even on the latest oVirt from the master branch and a vdsm nightly build.

Comment 9 vvyazmin@redhat.com 2013-08-08 13:46:41 UTC
I can reproduce it again; you are welcome to see it reproduced.

Comment 11 Shahar Havivi 2013-08-11 11:05:39 UTC
Works in version: 9.1