Bug 832369

Summary: A live VM's status is changed to "Not responding" during installation.
Product: [Retired] oVirt
Reporter: Mark Wu <wudxw>
Component: ovirt-engine-webadmin
Assignee: Einav Cohen <ecohen>
Status: CLOSED WONTFIX
QA Contact:
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 3.1 RC
CC: acathrow, dyasny, ecohen, iheim, mgoldboi, michal.skrivanek, ykaul
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard: virt
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-03-12 08:52:25 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Mark Wu 2012-06-15 09:19:50 UTC
Description of problem:
The VM's status changes to "Not responding" during guest OS installation, even though I can see in the SPICE console window that the installation is still in progress. From vdsm.log I can also see that the engine keeps polling the VM with vmGetStats and that vdsm reports the VM's status as Up. The host's status is 'Up' in the web UI.
No related messages are found in engine.log, so I suspect this is a bug in the engine webadmin.


./vdsm.log:Thread-17105::DEBUG::2012-06-15 17:05:13,994::BindingXMLRPC::859::vds::(wrapper) client [192.168.122.1]::call vmGetStats with ('a2ba8756-1b3c-47f7-9793-7f78aaf7e021',) {}
./vdsm.log:Thread-17105::DEBUG::2012-06-15 17:05:13,996::BindingXMLRPC::865::vds::(wrapper) return vmGetStats with {'status': {'message': 'Done', 'code': 0}, 'statsList': [{'status': 'Up', 'username': 'Unknown', 'memUsage': '0', 'acpiEnable': 'true', 'pid': '6878', 'displayIp': '0', 'displayPort': u'5900', 'session': 'Unknown', 'displaySecurePort': '-1', 'timeOffset': -2L, 'hash': '5685342449532524952', 'pauseCode': 'NOERR', 'clientIp': '192.168.122.1', 'kvmEnable': 'true', 'network': {}, 'vmId': 'a2ba8756-1b3c-47f7-9793-7f78aaf7e021', 'displayType': 'qxl', 'cpuUser': '13.32', 'disks': {u'hdc': {'flushLatency': '0', 'readLatency': '0', 'writeLatency': '0'}, u'hda': {'readLatency': '6104988', 'apparentsize': '5368709120', 'writeLatency': '43576967', 'imageID': '1152db7d-3e52-4d6f-9e8c-48d31137798e', 'flushLatency': '76625', 'readRate': '24384.89', 'truesize': '5368713216', 'writeRate': '119733.22'}}, 'monitorResponse': '-1', 'statsAge': '2162.34', 'cpuIdle': '86.68', 'elapsedTime': '4433', 'vmType': 'kvm', 'cpuSys': '0.00', 'appsList': [], 'guestIPs': '', 'nice': ''}]}
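
For reference, the engine's poll can be reproduced by hand while the webadmin shows "Not responding". This is only a minimal sketch: it assumes vdsm listens on its default port 54321, that it was configured with ssl = false in /etc/vdsm/vdsm.conf, and that the public XML-RPC verb is getVmStats (the vmGetStats wrapper seen in the log); with the default SSL setup, running "vdsClient -s 0 getVmStats <uuid>" on the host should return the same data. The host name below is hypothetical; the UUID is the one from the log above.

# Poll vdsm directly for the VM's stats, the same per-VM query the engine issues.
# Assumes plain (non-SSL) XML-RPC access to vdsm on port 54321.
import xmlrpclib

HOST = 'hypervisor.example.com'   # hypothetical, replace with the real host
VM_ID = 'a2ba8756-1b3c-47f7-9793-7f78aaf7e021'   # UUID taken from the vdsm.log lines above

server = xmlrpclib.ServerProxy('http://%s:54321' % HOST)
resp = server.getVmStats(VM_ID)
stats = resp['statsList'][0]
# vdsm keeps reporting the VM as Up even while the webadmin shows "Not responding".
print stats['status'], stats['monitorResponse']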




Version-Release number of selected component (if applicable):
ovirt-engine-webadmin-portal-3.1.0_0001-1.8.fc17.noarch
ovirt-engine-core-3.1.0_0001-1.8.fc17.noarch

How reproducible:
I have seen it twice.

Steps to Reproduce:
Install a guest OS in a VM and wait for about 40 minutes. In my case the installation is quite slow.
  
Actual results:
The VM is shown as "Not responding" in the webadmin, even though the installation continues and vdsm reports the VM as Up.

Expected results:
The VM stays in the Up state while the guest installation is running.

Additional info:

Comment 1 Andrew Cathrow 2012-06-16 16:26:13 UTC
What version of libvirt are you running? There was a recent fix to address this.

Comment 2 Mark Wu 2012-06-18 00:27:36 UTC
I am using libvirt-0.9.11.3-1.fc17.  Are you talking about this problem:
https://bugzilla.redhat.com/show_bug.cgi?id=828633

I am sure it's not the same issue: I had already hit the sanlock issue and disabled sanlock in qemu.conf to work around it. The problem I reported here happened with sanlock disabled.
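
For context, the sanlock workaround mentioned above amounts to switching libvirt's QEMU lock manager back to the default no-op driver in /etc/libvirt/qemu.conf; a minimal sketch, assuming the stock option name (libvirtd needs a restart afterwards):

# /etc/libvirt/qemu.conf
# Revert to the built-in no-op lock manager so sanlock is not used for disk leases;
# commenting out the "sanlock" line has the same effect.
lock_manager = "nop"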

Comment 3 Itamar Heim 2013-03-12 08:52:25 UTC
Closing old bugs. If this issue is still relevant/important in the current version, please re-open the bug.