Bug 832369 - A live VM's status is changed to "Not responding" during installation.
Status: CLOSED WONTFIX
Product: oVirt
Classification: Community
Component: ovirt-engine-webadmin
Version: 3.1 RC
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Assigned To: Einav Cohen
Keywords: virt
Reported: 2012-06-15 05:19 EDT by Mark Wu
Modified: 2016-01-28 16:55 EST
CC List: 7 users

Doc Type: Bug Fix
Last Closed: 2013-03-12 04:52:25 EDT
Type: Bug

Attachments: None
Description Mark Wu 2012-06-15 05:19:50 EDT
Description of problem:
The VM's status changed to "Not responding" during installation, but in the SPICE add-on window I can see that the installation is still ongoing. From vdsm.log I also see that the engine is still polling the VM's stats with vmGetStats and that vdsm reports the VM's status as Up; the host status is 'up' in the web UI.
No related messages were found in engine.log, so I suspect this is a bug in the engine webadmin.
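
For reference, such a poll can be issued by hand against vdsm's XML-RPC API. A minimal sketch, assuming a plain-HTTP endpoint on vdsm's usual port 54321 (real deployments typically serve this over SSL), a hypothetical host name, and the exported method name getVmStats behind the vmGetStats wrapper seen in the log below:

import xmlrpc.client  # xmlrpclib on the Python 2 stacks of that era

# Hypothetical host; 54321 is vdsm's customary XML-RPC port.
server = xmlrpc.client.ServerProxy("http://ovirt-node.example.com:54321")

# VM UUID taken from the vdsm.log lines below.
resp = server.getVmStats("a2ba8756-1b3c-47f7-9793-7f78aaf7e021")

print(resp["status"])  # e.g. {'message': 'Done', 'code': 0}
for stats in resp.get("statsList", []):
    print(stats["status"], stats.get("monitorResponse"))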


./vdsm.log:Thread-17105::DEBUG::2012-06-15 17:05:13,994::BindingXMLRPC::859::vds::(wrapper) client [192.168.122.1]::call vmGetStats with ('a2ba8756-1b3c-47f7-9793-7f78aaf7e021',) {}
./vdsm.log:Thread-17105::DEBUG::2012-06-15 17:05:13,996::BindingXMLRPC::865::vds::(wrapper) return vmGetStats with {'status': {'message': 'Done', 'code': 0}, 'statsList': [{'status': 'Up', 'username': 'Unknown', 'memUsage': '0', 'acpiEnable': 'true', 'pid': '6878', 'displayIp': '0', 'displayPort': u'5900', 'session': 'Unknown', 'displaySecurePort': '-1', 'timeOffset': -2L, 'hash': '5685342449532524952', 'pauseCode': 'NOERR', 'clientIp': '192.168.122.1', 'kvmEnable': 'true', 'network': {}, 'vmId': 'a2ba8756-1b3c-47f7-9793-7f78aaf7e021', 'displayType': 'qxl', 'cpuUser': '13.32', 'disks': {u'hdc': {'flushLatency': '0', 'readLatency': '0', 'writeLatency': '0'}, u'hda': {'readLatency': '6104988', 'apparentsize': '5368709120', 'writeLatency': '43576967', 'imageID': '1152db7d-3e52-4d6f-9e8c-48d31137798e', 'flushLatency': '76625', 'readRate': '24384.89', 'truesize': '5368713216', 'writeRate': '119733.22'}}, 'monitorResponse': '-1', 'statsAge': '2162.34', 'cpuIdle': '86.68', 'elapsedTime': '4433', 'vmType': 'kvm', 'cpuSys': '0.00', 'appsList': [], 'guestIPs': '', 'nice': ''}]}
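
Worth pulling out of the blob above: vdsm reports 'status': 'Up' and, in the same statsList entry, 'monitorResponse': '-1'. A minimal sketch of extracting the two fields side by side, assuming resp holds the decoded vmGetStats response as in the earlier snippet:

stats = resp["statsList"][0]
print("status:", stats["status"])                    # 'Up' in the log above
print("monitorResponse:", stats["monitorResponse"])  # '-1' in the log above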

Version-Release number of selected component (if applicable):
ovirt-engine-webadmin-portal-3.1.0_0001-1.8.fc17.noarch
ovirt-engine-core-3.1.0_0001-1.8.fc17.noarch

How reproducible:
I have seen it twice.

Steps to Reproduce:
Install a VM and wait for about 40 minutes. In my case the installation was quite slow.
  
Actual results:
The VM's status changes to "Not responding" while the guest installation is still in progress.

Expected results:
The VM's status remains "Up", matching what vdsm reports.

Additional info:
Comment 1 Andrew Cathrow 2012-06-16 12:26:13 EDT
What version of libvirt are you running? There was a recent fix to address this.
Comment 2 Mark Wu 2012-06-17 20:27:36 EDT
I am using libvirt-0.9.11.3-1.fc17.  Are you talking about this problem:
https://bugzilla.redhat.com/show_bug.cgi?id=828633

I am sure it's not the same issue: I had already hit the sanlock issue and disabled sanlock in qemu.conf to work around it. The problem I reported here happened with sanlock disabled.
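
For reference, a sketch of what that workaround typically looks like: in libvirt's qemu.conf the sanlock lock manager is enabled by the lock_manager setting, so disabling it means commenting that line out and restarting libvirtd (the path below is the usual Fedora default):

# /etc/libvirt/qemu.conf
# Leaving this commented out disables the sanlock lock manager:
#lock_manager = "sanlock"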
Comment 3 Itamar Heim 2013-03-12 04:52:25 EDT
Closing old bugs. If this issue is still relevant/important in the current version, please re-open the bug.
