Bug 832369 - A live VM's status changes to "Not responding" during installation.
Status: CLOSED WONTFIX
Product: oVirt
Classification: Retired
Component: ovirt-engine-webadmin
Version: 3.1 RC
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Einav Cohen
Whiteboard: virt
 
Reported: 2012-06-15 09:19 UTC by Mark Wu
Modified: 2016-01-28 21:55 UTC

Last Closed: 2013-03-12 08:52:25 UTC



Description Mark Wu 2012-06-15 09:19:50 UTC
Description of problem:
The VM's status changes to "Not responding" during installation, even though the installation is still visibly in progress in the SPICE add-on window. vdsm.log shows that the engine still polls the VM's stats with vmGetStats, and vdsm reports the VM's status as "Up". The host status is also "Up" in the web UI.
There are no related messages in engine.log, so I suspect a bug in the engine webadmin.


./vdsm.log:Thread-17105::DEBUG::2012-06-15 17:05:13,994::BindingXMLRPC::859::vds::(wrapper) client [192.168.122.1]::call vmGetStats with ('a2ba8756-1b3c-47f7-9793-7f78aaf7e021',) {}
./vdsm.log:Thread-17105::DEBUG::2012-06-15 17:05:13,996::BindingXMLRPC::865::vds::(wrapper) return vmGetStats with {'status': {'message': 'Done', 'code': 0}, 'statsList': [{'status': 'Up', 'username': 'Unknown', 'memUsage': '0', 'acpiEnable': 'true', 'pid': '6878', 'displayIp': '0', 'displayPort': u'5900', 'session': 'Unknown', 'displaySecurePort': '-1', 'timeOffset': -2L, 'hash': '5685342449532524952', 'pauseCode': 'NOERR', 'clientIp': '192.168.122.1', 'kvmEnable': 'true', 'network': {}, 'vmId': 'a2ba8756-1b3c-47f7-9793-7f78aaf7e021', 'displayType': 'qxl', 'cpuUser': '13.32', 'disks': {u'hdc': {'flushLatency': '0', 'readLatency': '0', 'writeLatency': '0'}, u'hda': {'readLatency': '6104988', 'apparentsize': '5368709120', 'writeLatency': '43576967', 'imageID': '1152db7d-3e52-4d6f-9e8c-48d31137798e', 'flushLatency': '76625', 'readRate': '24384.89', 'truesize': '5368713216', 'writeRate': '119733.22'}}, 'monitorResponse': '-1', 'statsAge': '2162.34', 'cpuIdle': '86.68', 'elapsedTime': '4433', 'vmType': 'kvm', 'cpuSys': '0.00', 'appsList': [], 'guestIPs': '', 'nice': ''}]}
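
Note that in the payload above vdsm reports 'status': 'Up' but 'monitorResponse': '-1', i.e. the qemu monitor is not answering queries; the engine may well derive the "Not responding" status from that field rather than from the webadmin layer. As a minimal sketch (not part of the original report), the same stats can be polled directly against vdsm's XML-RPC binding. This assumes vdsm is configured without SSL (ssl = false in /etc/vdsm/vdsm.conf), listens on the default port 54321, and exposes the verb as getVmStats (the vdsClient name for the vmGetStats call logged above); with the default SSL setup, "vdsClient -s 0 getVmStats <vmId>" does the same from the shell.

# Sketch only: mirrors the engine's stats poll seen in the log above.
import xmlrpclib

VDSM_URL = 'http://localhost:54321'  # vdsm's default XML-RPC port
VM_ID = 'a2ba8756-1b3c-47f7-9793-7f78aaf7e021'  # VM UUID from the log above

server = xmlrpclib.ServerProxy(VDSM_URL)
resp = server.getVmStats(VM_ID)
if resp['status']['code'] == 0:
    stats = resp['statsList'][0]
    # 'Up' plus monitorResponse == '-1' reproduces the mismatch seen above:
    # the guest is running, but the qemu monitor is not responding.
    print stats['status'], stats['monitorResponse']
else:
    print 'vdsm error:', resp['status']['message']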

Version-Release number of selected component (if applicable):
ovirt-engine-webadmin-portal-3.1.0_0001-1.8.fc17.noarch
ovirt-engine-core-3.1.0_0001-1.8.fc17.noarch

How reproducible:
I have seen it twice.

Steps to Reproduce:
Install a VM and wait for about 40 minutes. In my case, the installation is quite slow.
  
Actual results:
The VM's status changes to "Not responding" in the webadmin UI while the guest is still running and vdsm keeps reporting it as "Up".

Expected results:
The VM's status stays "Up" for as long as vdsm reports it as up.

Additional info:

Comment 1 Andrew Cathrow 2012-06-16 16:26:13 UTC
What version of libvirt are you running? There was a recent fix to address this.

Comment 2 Mark Wu 2012-06-18 00:27:36 UTC
I am using libvirt-0.9.11.3-1.fc17. Are you referring to this problem:
https://bugzilla.redhat.com/show_bug.cgi?id=828633

I am sure it's not the same issue: I ran into the sanlock issue earlier and disabled sanlock in qemu.conf to work around it (see the sketch below). The problem reported here happened with sanlock already disabled.
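
For reference, the workaround mentioned above amounts to a one-line change in /etc/libvirt/qemu.conf; this is a sketch of the edit, not a copy of the actual config:

# /etc/libvirt/qemu.conf
# Commenting out the lock manager disables sanlock-based disk locking
# (sketch of the workaround; the surrounding defaults may differ per version):
#lock_manager = "sanlock"

libvirtd then needs a restart (systemctl restart libvirtd.service on Fedora 17) for the change to take effect.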

Comment 3 Itamar Heim 2013-03-12 08:52:25 UTC
Closing old bugs. If this issue is still relevant or important in the current version, please re-open the bug.

