Bug 619035 - [vdsm] [libvirt] libvirtd.log is flooded with I/O event errors (80227 times)
Status: CLOSED CURRENTRELEASE
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: vdsm
Version: 6.1
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: rc
Target Release: ---
Assigned To: Dan Kenigsberg
QA Contact: Moran Goldboim
Flags: RHELNAK
Depends On:
Blocks:
Reported: 2010-07-28 08:16 EDT by Haim
Modified: 2014-01-12 19:46 EST (History)
CC List: 11 users

See Also:
Fixed In Version: vdsm-4.9-12.el6
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2011-08-19 11:17:54 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
libvirtd log (219.44 KB, application/x-gzip)
2010-07-28 08:16 EDT, Haim

Description Haim 2010-07-28 08:16:29 EDT
Created attachment 434990 [details]
libvirtd log

Description of problem:

libvirtd.log is flooded with 'remoteRelayDomainEventIOErrorReason' ~80K times when a guest can't reach its block device (storage communication problems).
please see attached log and run the following: 

grep remoteRelayDomainEventIOErrorReason /tmp/libvirtd.log | wc -l   (result: ~80K)

number of running guests: 16 

see attached log. 

libvirt-0.8.1-18.el6.x86_64
qemu-kvm-0.12.1.2-2.96.el6.x86_64
vdsm-4.9-10.el6.x86_64
2.6.32-44.1.el6.x86_64

repro steps: 

1) run about 16 vms 
2) block communication between the host and storage
Comment 2 RHEL Product and Program Management 2010-07-28 08:37:59 EDT
This issue has been proposed when we are only considering blocker
issues in the current Red Hat Enterprise Linux release.

** If you would still like this issue considered for the current
release, ask your support representative to file as a blocker on
your behalf. Otherwise ask that it be considered for the next
Red Hat Enterprise Linux release. **
Comment 3 Dave Allan 2010-07-28 10:34:33 EDT
Is it required that more than one guest be running to trigger the messages?
Comment 4 Daniel Berrange 2010-07-28 14:30:49 EDT
What is the guest XML? In particular, what I/O error policy is set for the guests? If a policy is set to 'stop' then you should only see a handful (<10) of reports, as outstanding I/O is processed and the VM is stopped. If the policy is set to 'ignore' you'll get a never-ending stream of errors, since the VM will continue retrying the I/O and failing.

If anything though, this is probably a QEMU bug - those log messages are at level 'debug' so I don't see any problem with them being logged if QEMU is sending them to us.
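
For reference, the per-disk I/O error policy described above is expressed in libvirt domain XML via the error_policy attribute on the disk <driver> element. A minimal sketch (the device path is a placeholder, not taken from this bug):

```xml
<disk device="disk" type="block">
  <source dev="/dev/mapper/example-lun"/>
  <target bus="virtio" dev="hda"/>
  <!-- error_policy="stop" pauses the guest on I/O error, so only a
       handful of IO_ERROR events are emitted; "ignore" lets the guest
       keep retrying, producing a continuous stream of events -->
  <driver cache="none" error_policy="stop" name="qemu" type="qcow2"/>
</disk>
```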
Comment 5 Dave Allan 2010-07-28 15:04:14 EDT
Given that those are debug level messages, and it seems like they represent actual events from the guest, this behavior doesn't seem like a bug to me.  I think it's fair to expect a fairly high volume of output when debug level logging is enabled, no?
Comment 6 Itamar Heim 2010-07-28 16:50:56 EDT
Haim - did you configure something to get debug messages, or is this the default mode?
Comment 7 Haim 2010-07-29 03:11:18 EDT
(In reply to comment #6)
> Haim - did you configure something to get debug messages, or is this the
> default mode?    

It's vdsm's default parameters (vdsm configures it). 

from vdsm.conf
log_outputs="1:file:/var/log/libvirtd.log" # by vdsm
log_filters="1:util 1:libvirt 1:qemu 1:remote" # by vdsm
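
In libvirt's log_filters syntax, the leading number on each entry is the verbosity level (1 = debug, 2 = info, 3 = warning, 4 = error), so the vdsm defaults above log the listed categories at debug level. A sketch of a quieter filter set in the same libvirtd.conf format (which categories are worth keeping at debug is a judgment call, not taken from this bug):

```
log_outputs="1:file:/var/log/libvirtd.log"
log_filters="3:remote 1:util 1:libvirt 1:qemu"
```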
Comment 8 Haim 2010-07-29 03:37:33 EDT
Moving to vdsm ownership, as it seems to be a problem with our disk policy for logging events.

  <devices>
    <disk device="disk" type="block">
      <source dev="/rhev/data-center/841af73a-d3bf-4bb8-9985-0603fdcf302e/88703353-1968-4875-bdc5-604582582f22/images/84adac3a-f0dd-4542-932a-69d3e588ba35/235ecbc8-738a-40e5-8a10-7deeb0d5eb0e"/>
      <target bus="virtio" dev="hda"/>
      <serial>42-932a-69d3e588ba35</serial>
      <driver cache="none" name="qemu" type="qcow2"/>
    </disk>
    <disk device="disk" type="block">
      <source dev="/rhev/data-center/841af73a-d3bf-4bb8-9985-0603fdcf302e/88703353-1968-4875-bdc5-604582582f22/images/a899ebee-1328-4f66-9644-a6585e7251f6/3f66267a-14ae-4778-897e-5a4d8c26c99d"/>
      <target bus="virtio" dev="hdb"/>
      <serial>66-9644-a6585e7251f6</serial>
      <driver cache="none" name="qemu" type="qcow2"/>
    </disk>
    <controller index="0" ports="16" type="virtio-serial"/>
    <channel type="unix">
      <target name="org.linux-kvm.port.0" type="virtio"/>
      <source mode="bind" path="/var/lib/libvirt/qemu/channels/libvirt-rhel54x-012.org.linux-kvm.port.0"/>
    </channel>
    <interface type="bridge">
      <mac address="00:1a:4a:23:71:83"/>
      <model type="virtio"/>
      <source bridge="rhevm"/>
    </interface>
    <input bus="usb" type="tablet"/>
    <graphics autoport="yes" keymap="en-us" listen="0" passwd="*****" passwdValidTo="1970-01-01T00:00:01" port="-1" type="vnc"/>
  </devices>
Comment 10 Haim 2010-08-19 04:00:02 EDT
verified; blocked communication between the host and storage while running 16 guests; the storage went down, and the data center as well. Ran the following: 

[root@silver-vdse x86_64]# grep remoteRelayDomainEventIOErrorReason /var/log/libvirtd.log |wc -l 
0


2.6.32-59.1.el6.x86_64
libvirt-0.8.1-25.el6.x86_64
vdsm-4.9-13.el6.x86_64
device-mapper-multipath-0.4.9-25.el6.x86_64
lvm2-2.02.72-7.el6.x86_64
qemu-kvm-0.12.1.2-2.109.el6.x86_64
