Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1133298

Summary: Error "No JSON object could be decoded" thrown by VDSM VM Channels Listener
Product: Red Hat Enterprise Virtualization Manager
Reporter: Yuri Obshansky <yobshans>
Component: vdsm
Assignee: Vinzenz Feenstra [evilissimo] <vfeenstr>
Status: CLOSED INSUFFICIENT_DATA
QA Contact: meital avital <mavital>
Severity: high
Docs Contact:
Priority: unspecified
Version: 3.4.1-1
CC: amit.shah, amureini, bazulay, ecohen, ghammer, iheim, jhunsaker, lpeer, michal.skrivanek, yeylon, ylavi, yobshans
Target Milestone: ovirt-3.5.6
Keywords: Reopened, Triaged
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard: virt
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-10-27 11:29:28 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Virt
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
- vdsm log (flags: none)
- libvirt log (flags: none)
- qemu log (flags: none)

Description Yuri Obshansky 2014-08-24 11:29:35 UTC
Description of problem:
Errors occurred several times on a host managing about 140 VMs
in a large-scale RHEVM environment.
The RHEVM environment has 160 Data Centers (each including Clusters, Storage Domains, and Fake Hosts running on VMs).

VM Channels Listener::ERROR::2014-08-20 12:14:02,859::guestIF::402::vm.Vm::(_processMessage) vmId=`110cabce-cf40-47ee-a8b8-96cda7fe9274`::No JSON object could be decoded: '\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00{"__name__": "host-name", "name": "vm-18-228.eng.lab.tlv.redhat.com"}'
VM Channels Listener::ERROR::2014-08-20 12:18:37,606::guestIF::402::vm.Vm::(_processMessage) vmId=`342172de-feb5-4a63-a7a7-74e46bd370b9`::No JSON object could be decoded: '\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00{"__name__": "host-name", "name": "vm-18-36.eng.lab.tlv.redhat.com"}'


VM Channels Listener::ERROR::2014-08-20 12:18:39,066::guestIF::402::vm.Vm::(_processMessage) vmId=`17998235-ea93-436d-b11c-6f33510335b3`::No JSON object could be decoded: '\x98*\x05w\x00\x88\xff\xff\x0c\x06\x0c\x06>\x00\x10\x00Z\x0e\x00\x00\x00\x00\x00\x00\xb0*\x05w\x00\x88\xff\xff\xcd\x03\x8c\x01>\x00\x10\x00Z\x0e\x00\x00\x00\x00\x00\x00`?\xc0v\x00\x88\xff\xff\xc2\x03\x1a\x0f\x06\x00\x01\x00\xff\x1f\x10\x00\x00\x00\x00\x00\xe0*\x05w\x00\x88\xff\xff\xc2\x03\x19\x0f'
VM Channels Listener::ERROR::2014-08-20 12:18:39,070::guestIF::402::vm.Vm::(_processMessage) vmId=`17998235-ea93-436d-b11c-6f33510335b3`::No JSON object could be decoded: '\x00\x01\x00\x90\x01\x00\x00\x00\x00\x00\x00\xf8*\x05w\x00\x88\xff\xff\xc2\x03\x19\x0b'
VM Channels Listener::ERROR::2014-08-20 12:18:39,075::guestIF::402::vm.Vm::(_processMessage) vmId=`17998235-ea93-436d-b11c-6f33510335b3`::No JSON object could be decoded: '\x00\x01\x00\x90\x01\x00\x00\x00\x00\x00\x00x?\xc0v\x00\x88\xff\xff\xc2\x03\x19\x07'
VM Channels Listener::ERROR::2014-08-20 12:18:39,079::guestIF::402::vm.Vm::(_processMessage) vmId=`17998235-ea93-436d-b11c-6f33510335b3`::No JSON object could be decoded: '\x00\x01\x00\x90\x01\x00\x00\x00\x00\x00\x00\x90?\xc0v\x00\x88\xff\xff\xc2\x03\x1a\x03\x06\x00\x01\x00\xff\x1f\x10\x00\x00\x00\x00\x00\xc0?\xc0v\x00\x88\xff\xff\xc2\x03\x19\x03'
VM Channels Listener::ERROR::2014-08-20 12:18:39,083::guestIF::402::vm.Vm::(_processMessage) vmId=`17998235-ea93-436d-b11c-6f33510335b3`::No JSON object could be decoded: '\x00\x01\x00\x90\x01\x00\x00\x00\x00\x00\x00X+\x05w\x00\x88\xff\xff\xad\x03\x8c\x01>\x00\x10\x00Z\x0e\x00\x00\x00\x00\x00\x00\xd8?\xc0v\x00\x88\xff\xff\x9d\x03\x8c{"__name__": "host-name", "name": "vm-18-13.eng.lab.tlv.redhat.com"}'
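For reference, the failure mode in these log lines can be reproduced outside VDSM: Python's `json` module rejects a buffer that still carries the channel's NUL padding in front of the payload. The sketch below is illustrative only, not VDSM's actual guestIF code; the error text is "No JSON object could be decoded" on Python 2 and "Expecting value" on Python 3.

```python
import json

# A channel read as it appears in the log: NUL padding, then the real
# guest agent message (padding shortened here for illustration).
raw = b'\x00' * 16 + b'{"__name__": "host-name", "name": "vm-18-228.eng.lab.tlv.redhat.com"}'

# Decoding the whole buffer fails -- this is what the listener logs.
try:
    json.loads(raw.decode('latin-1'))
    decoded = True
except ValueError:
    decoded = False
print(decoded)  # False

# Stripping the NUL padding recovers the payload.
msg = json.loads(raw.lstrip(b'\x00'))
print(msg["name"])  # vm-18-228.eng.lab.tlv.redhat.com
```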


Version-Release number of selected component (if applicable):
RHEVM Version: 3.4.1-0.25.el6ev
OS Version: RHEL - 6Server - 6.5.0.1.el6
Kernel Version: 2.6.32 - 431.el6.x86_64
KVM Version: 0.12.1.2 - 2.415.el6_5.9
LIBVIRT Version: libvirt-0.10.2-29.el6_5.7
VDSM Version: vdsm-4.14.7-2.el6ev


How reproducible:
Unfortunately, I cannot provide steps to reproduce. 

Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 Yuri Obshansky 2014-08-24 11:31:46 UTC
Created attachment 930119 [details]
vdsm log

Comment 2 Michal Skrivanek 2014-08-25 06:49:26 UTC
Looks like random data coming from the VM.
Either the guest agent went crazy (which sounds unlikely) or there's a bug in the virtio-serial interface (it wouldn't be the first ;)

Comment 3 Vinzenz Feenstra [evilissimo] 2014-08-26 08:44:13 UTC
(In reply to Michal Skrivanek from comment #2)
> looks like random data coming from the VM
> either the guest agent went crazy (which sounds unlikely) or there's a bug
> in the virtio-serial interface(it wouldn't be the first;)
It looks like random data comes from the VIO channel after the reboot.
The random data we see precedes the first message the guest agent sends when it starts ('host-name'); subsequent messages did not have this problem anymore.
This seems like a problem with the VIO channel (e.g. uninitialized variables or similar); the data looks like some random memory buffer passed to the VIO channel on the host.
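If the junk really does only precede the first well-formed message, a defensive reader could scan for the start of the JSON payload before decoding. This is only a sketch of a possible workaround (the function name and approach are mine, not vdsm's; silently discarding the prefix would also hide the underlying channel bug):

```python
import json

def salvage_json(buf):
    """Best-effort recovery of a JSON object from a buffer with a
    binary prefix: find the first '{' and try to decode from there.
    Note a stray '{' inside the garbage would still defeat this
    simple single-scan approach."""
    start = buf.find(b'{')
    if start == -1:
        return None
    try:
        return json.loads(buf[start:].decode('utf-8', errors='replace'))
    except ValueError:
        return None

garbled = b'\x00' * 8 + b'{"__name__": "host-name", "name": "vm-18-13.eng.lab.tlv.redhat.com"}'
print(salvage_json(garbled))
```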


@Yuri:
Please attach libvirt and qemu logs for that host.
Also please report the qemu version. Thanks.

Comment 4 Yuri Obshansky 2014-08-26 10:40:14 UTC
Created attachment 930818 [details]
libvirt log

Comment 5 Yuri Obshansky 2014-08-26 10:43:07 UTC
@Vinzenz:
How can I check qemu version?
Thanks

Comment 6 Gal Hammer 2014-08-26 10:44:34 UTC
What is the virtio-serial driver version? And which Windows version is running on the guest?

(In reply to Yuri Obshansky from comment #5)

> @Vinzenz:
> How can I check qemu version?
> Thanks

KVM Version: 0.12.1.2 - 2.415.el6_5.9

Comment 7 Yuri Obshansky 2014-08-26 10:49:06 UTC
[root@ucs1-b420-1 libvirt]# kvm --version
-bash: kvm: command not found
[root@ucs1-b420-1 libvirt]# uname -r 
2.6.32-431.23.3.el6.x86_64

@Gal:
No Windows
RHEL - 6Server - 6.5.0.1.el6

Comment 8 Yuri Obshansky 2014-08-26 10:55:07 UTC
[root@ucs1-b420-1 libvirt]# rpm -qa | grep kvm
qemu-kvm-rhev-0.12.1.2-2.415.el6_5.10.x86_64
qemu-kvm-rhev-tools-0.12.1.2-2.415.el6_5.10.x86_64

Comment 9 Gal Hammer 2014-08-26 13:18:12 UTC
(In reply to Yuri Obshansky from comment #7)
> root@ucs1-b420-1 libvirt]# kvm --version
> -bash: kvm: command not found
> [root@ucs1-b420-1 libvirt]# uname -r 
> 2.6.32-431.23.3.el6.x86_64
> 
> @Gal:
> No Windows
> RHEL - 6Server - 6.5.0.1.el6

Good to know.

Can you please check which version of the guest agent is running inside a problematic guest? And with what command line was that guest executed?

Comment 10 Yuri Obshansky 2014-08-26 13:22:08 UTC
rhevm-guest-agent-common-1.0.9-1.el6ev
@Gal:
What do you mean by
"And what is that guest was executed (its command line)?"
Do you mean log files from the VM?

Comment 15 Yuri Obshansky 2014-08-31 06:11:01 UTC
Created attachment 933048 [details]
qemu log

Comment 16 Vinzenz Feenstra [evilissimo] 2014-09-01 07:37:12 UTC
Gal, could you please check if there's anything that seems wrong on the command line? Thanks.

Comment 17 Gal Hammer 2014-09-01 08:03:24 UTC
(In reply to Vinzenz Feenstra [evilissimo] from comment #16)
> Gal could you please check if there's anything what seems wrong on the
> command line? Thanks.

Looks okay to me.

Except for the "block I/O error in device 'drive-virtio-disk0': Input/output error (5)" messages...

Comment 18 Vinzenz Feenstra [evilissimo] 2014-09-01 12:28:20 UTC
@Yuri:

We can see in the qemu logs that there were several I/O errors. Were your VMs paused at some point due to the I/O errors, and did you resume the VMs afterwards?

Comment 19 Yuri Obshansky 2014-09-01 12:33:14 UTC
I'm not sure what happened; it was more than a week ago.
There were several Fake host crashes
(a Fake host is a VM running vdsm with the fake qemu vdsm plugin).

Comment 20 Vinzenz Feenstra [evilissimo] 2014-09-01 12:42:13 UTC
The VM vm-18-13.eng.lab.tlv.redhat.com scale_169 was also a Fake VM?
That would be pretty strange; what kind of guest agent are you using there?

If not, and this is a real VM, please let me know the kernel version and guest agent version.
Thanks.

Comment 21 Yuri Obshansky 2014-09-01 12:55:02 UTC
The VM vm-18-13.eng.lab.tlv.redhat.com scale_169 was a real VM.
kernel-2.6.32-431.el6
rhevm-guest-agent-common-1.0.9-1.el6ev
When I create a Fake host I provide the IP address of a real VM
(in my case vm-18-13.eng.lab.tlv.redhat.com scale_169),
and that VM keeps running.

Comment 22 Vinzenz Feenstra [evilissimo] 2014-09-04 12:37:20 UTC
@Amit

Is there any additional information we could gather from the host or guest system that you think would help figure out what could cause this issue?

I am not really able to reproduce this issue, though :(

Comment 23 Amit Shah 2014-09-09 04:58:35 UTC
I can't see anything wrong; you have 3 virtio-serial ports: one for rhv.vdsm, one for qemu-guest-agent, one for spicevmc. It's possible one of those three is sending bad input.

It's also possible they went into a state they didn't expect to go, and are just flushing the queue.

Can't think of other hints.

A few things to try, though: does it happen with both Windows and Linux guests? If only one type is affected, it's likely a guest bug (which I think is the case here, also because this only happens on reboot). Try enabling just one agent at a time and see which one gives that output. This could help identify whether it's a guest agent or a guest driver problem.

Comment 24 Vinzenz Feenstra [evilissimo] 2014-10-07 09:28:57 UTC
Yuri, could you please follow up on the questions in comment #23?

Unfortunately I am not able to reproduce this in my own environment.

Comment 25 Yuri Obshansky 2014-10-07 13:15:03 UTC
Sorry for the delay.
Only Linux guests.
It happened during the Limits scale test, when I created a huge number of Data Centers.
We have currently paused execution of the Limits scale test because we don't have enough hardware resources, and the bug did not reproduce in other configurations.
So let's wait until I return to the Limits scale test, or close the bug.
Regards,
Yuri

Comment 26 Vinzenz Feenstra [evilissimo] 2014-10-07 13:23:58 UTC
Yuri, since we are not able to reproduce this in a more sane way, I will close this bug for now as INSUFFICIENT_DATA; please reopen when you encounter it again.

Thanks a lot.

Comment 27 Jake Hunsaker 2015-10-13 17:16:13 UTC
I am re-opening this bug as we have a customer hitting this again. Per the customer:

On 09/15 at 09:43, a Windows 2008 guest (named BST_TC_WEB5) hung.
The customer then powered BST_TC_WEB5 off and on, and it was back up at 10:05.

*NOTE* Add 9 hours to the timestamps in the logs of the hypervisor (i.e. cloud2-hv2), because that host runs in UTC,
while RHEV-M (cloud2-manager) runs in JST.
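The 9-hour UTC-to-JST offset can be applied mechanically when correlating the hypervisor and RHEV-M logs. A small helper for reading this report (hypothetical, not part of any product):

```python
from datetime import datetime, timedelta

def utc_to_jst(ts):
    """Shift a 'YYYY-MM-DD HH:MM:SS' UTC timestamp to JST (UTC+9)."""
    t = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")
    return (t + timedelta(hours=9)).strftime("%Y-%m-%d %H:%M:%S")

# The qemu shutdown entry at 2015-09-15 00:58:52 UTC corresponds to
# 09:58:52 JST, matching the customer's timeline.
print(utc_to_jst("2015-09-15 00:58:52"))  # 2015-09-15 09:58:52
```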


cloud2-hv02-2015091506141442297665/var/log/libvirt/qemu/BST_TC_WEB5.log:

2015-09-03 08:05:36.511+0000: starting up
<snip>
qemu: terminating on signal 15 from pid 33227
2015-09-15 00:58:52.062+0000: shutting down <<<<< 09:58:52 (JST)
2015-09-15 01:02:43.467+0000: starting up
<snip>

vdsm.log shows this: 


From vdsm.log.3, the following messages were logged around 09:53:43 - 09:53:44 (JST)

# grep 0b833dd5-32c6-4fac-a42d-c073ccb490e3 vdsm.log.3
VM Channels Listener::ERROR::2015-09-15 00:53:43,424::guestagent::417::vm.Vm::(_processMessage) vmId=`0b833dd5-32c6-4fac-a42d-c073ccb490e3`::Extra data: line 1 column 1 - line 1 column 953 (char 1 - 953): '0\x1a\x00\x00\x00\x00\x00\x00\xd0\x1e\xbf\x86\xd0\x1e\xbf\x86\x00\x00\x10\x00\x00\x00\x00\x000\x1a\x00\x00\x00\x00\x00\x000\x1a\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xf0\x1e\xbf\x86\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x90q\x98\x81\x1c\xb3\xff\x840\x04\x0c\x00\x00\x00\x00\x00\x00\x00\x00\x00\xc8AM\x91\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0c\x93\xc4\x8c0\xa5t\x88X\x1f\xbf\x86X\x1f\xbf\x86\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00p\xa6{\x8dp\xa6{\x8dx\xa6{\x8dx\xa6{\x8d\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xac\xa6{\x8d\xac\xa6{\x8d\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xc0\x9fx\x8c0\xa7{\x8d\xc19\x90\x81 \xc0\xc1\x84\x00\x00\x00\x00 
\xff\x84\x10\xab{\x8d\x00\xb0{\x8d\x00\x80{\x8d\x00\xa2{\x8d\x00\xb0{\x8d\x00\x80{p\xa6{\x8dp\xa6{\x8dx\xa6{\x8dx\xa6{\x8d\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xac\xa6{\x8d\xac\xa6{\x8d\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xc0\x9fx\x8c0\xa7{\x8d\xc19\x90\x81 \xc0\xc1\x84\x00\x00\x00\x00 \xff\x84\x10\xab{\x8d\x00\xb0{\x8d\x00\x80{\x8d\x00\xa2{\x8d\x00\xb0{\x8d\x00\x80{p\xa6{\x8dp\xa6{\x8dx\xa6{\x8dx\xa6{\x8d\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xac\xa6{\x8d\xac\xa6{\x8d\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xc0\x9fx\x8c0\xa7{\x8d\xc19\x90\x81 \xc0\xc1\x84\x00\x00\x00\x00 \xff\x84\x10\xab{\x8d\x00\xb0{\x8d\x00\x80{\x8d\x00\xa2{\x8d\x00\xb0{\x8d\x00\x80{{"__name__": "heartbeat", "memory-stat": {"swap_out": 0, "majflt": 0, "mem_free": "2783376", "swap_in": 0, "pageflt": 0, "mem_total": "4192976", "mem_unused": "2783376"}, "free-ram": "2718"}'
VM Channels Listener::ERROR::2015-09-15 00:53:43,502::guestagent::417::vm.Vm::(_processMessage) vmId=`0b833dd5-32c6-4fac-a42d-c073ccb490e3`::No JSON object could be decoded: ''
VM Channels Listener::ERROR::2015-09-15 00:53:43,505::guestagent::417::vm.Vm::(_processMessage) vmId=`0b833dd5-32c6-4fac-a42d-c073ccb490e3`::No JSON object could be decoded: '\x04\xa5x\x86\x10\x00\x00\x00|\xa5x\x86\x14\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x07\x00!\x18NSpg\x08\x96{\x86\x00\x00\x00\x00 \xa5x\x86\x07\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x008\xa5x\x86\x08\x00\x00\x00L\xa5x\x86\x98\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00@\xa5x\x86\x0c\x00\x00\x00\x18\x00\x00\x00\x01\x00\x00\x00\x00J\x00\xeb\x1a\x9b\xd4\x11\x91#\x00P\x04wY\xbc\x00\x00\x00'
VM Channels Listener::ERROR::2015-09-15 00:53:43,510::guestagent::417::vm.Vm::(_processMessage) vmId=`0b833dd5-32c6-4fac-a42d-c073ccb490e3`::No JSON object could be decoded:

<snip>

VM Channels Listener::ERROR::2015-09-15 00:53:44,117::guestagent::417::vm.Vm::(_processMessage) vmId=`0b833dd5-32c6-4fac-a42d-c073ccb490e3`::No JSON object could be decoded: '\x00\x00\x06\x00\x00\x00\x00'
VM Channels Listener::ERROR::2015-09-15 00:53:44,118::guestagent::417::vm.Vm::(_processMessage) vmId=`0b833dd5-32c6-4fac-a42d-c073ccb490e3`::No JSON object could be decoded:




Attaching data shortly.

Comment 29 Jake Hunsaker 2015-10-13 17:19:30 UTC
Driver versions in use:

RHEV-Balloon
Version ................................... 4.9.4
Installed ................................. 2013/07/19
Install Location .......................... C:\Program Files\Redhat\RHEV\Drivers\
Registered Owner .......................... DirectAnswer
Registered Company ........................ TCI-DMS
Vendor .................................... Red Hat Inc.
RHEV-Spice-Agent
Version ................................... 4.9.5
Installed ................................. 2013/07/19
Install Location .......................... C:\Program Files\Redhat\RHEV\Drivers\Spice\
Registered Owner .......................... DirectAnswer
Registered Company ........................ TCI-DMS
Vendor .................................... Red Hat Inc.
RHEV-Serial
Version ................................... 4.9.4
Installed ................................. 2013/07/19
Install Location .......................... C:\Program Files\Redhat\RHEV\Drivers\
Registered Owner .......................... DirectAnswer
Registered Company ........................ TCI-DMS
Vendor .................................... Red Hat Inc.
RHEV-Agent
Version ................................... 4.9.5
Installed ................................. 2013/07/19
Install Location .......................... C:\Program Files\Redhat\RHEV\Drivers\Agent\
Registered Owner .......................... DirectAnswer
Registered Company ........................ TCI-DMS
Vendor .................................... Red Hat Inc.
RHEV-Block
Version ................................... 4.9.4
Installed ................................. 2013/07/19
Install Location .......................... C:\Program Files\Redhat\RHEV\Drivers\
Registered Owner .......................... DirectAnswer
Registered Company ........................ TCI-DMS
Vendor .................................... Red Hat Inc.
RHEV-Network
Version ................................... 4.9.4
Installed ................................. 2013/07/19
Install Location .......................... C:\Program Files\Redhat\RHEV\Drivers\Network\
Registered Owner .......................... DirectAnswer
Registered Company ........................ TCI-DMS
Vendor .................................... Red Hat Inc.

Comment 30 Vinzenz Feenstra [evilissimo] 2015-10-14 09:58:54 UTC
Well, the first thing to note is that those drivers on the guest are really old. I'd ask you to first upgrade the guest tools and drivers on the guest side and re-report if it still happens.

Additionally, please provide the logs from the hypervisor where this was happening.
I'd mainly need the vdsm logs for the affected time frame to get an idea of what is happening around it.

Thanks.

Comment 31 Yuri Obshansky 2015-10-14 10:09:25 UTC
Hi,
I provided a vdsm log (see attachment).
I don't currently have an environment in which to reproduce it.
This bug was opened more than a year ago.

Comment 32 Jake Hunsaker 2015-10-26 14:21:28 UTC
Have not heard from customer since suggesting the tools upgrade. Will reach out.

Comment 33 Vinzenz Feenstra [evilissimo] 2015-10-27 11:29:28 UTC
Since we were not able to reproduce this bug earlier either, I will close it for now; please reopen when the customer responds.

If it happens again with newer guest tools, please report the following additional details from the hypervisor on which the VM runs:

- qemu version
- libvirt version
- kernel version
Thanks