Bug 619035 - [vdsm] [libvirt] libvirtd.log is flooded with I/O event errors (80227 times)
Summary: [vdsm] [libvirt] libvirtd.log is flooded with I/O event errors (80227 times)
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: vdsm
Version: 6.1
Hardware: All Linux
Target Milestone: rc
Assignee: Dan Kenigsberg
QA Contact: Moran Goldboim
Keywords: RHELNAK
Depends On:
Reported: 2010-07-28 12:16 UTC by Haim
Modified: 2014-01-13 00:46 UTC (History)
11 users

Fixed In Version: vdsm-4.9-12.el6
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2011-08-19 15:17:54 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments
libvirtd log (219.44 KB, application/x-gzip)
2010-07-28 12:16 UTC, Haim

Description Haim 2010-07-28 12:16:29 UTC
Created attachment 434990 [details]
libvirtd log

Description of problem:

libvirtd.log is flooded with 'remoteRelayDomainEventIOErrorReason' around 80K times when a guest can't reach its block device (storage communication problems).
Please see the attached log and run the following:

grep remoteRelayDomainEventIOErrorReason /tmp/libvirtd.log | wc -l    # ~80K

number of running guests: 16 

see attached log. 
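The counting pipeline above can be demonstrated self-contained against a synthetic log (the attached log is not reproduced here), using grep -c as a shorthand for grep | wc -l:

```shell
# Build a small synthetic log standing in for /tmp/libvirtd.log.
LOG=$(mktemp)
for i in 1 2 3; do
    echo "debug : remoteRelayDomainEventIOErrorReason : relaying I/O error event" >> "$LOG"
done
echo "debug : some unrelated message" >> "$LOG"

# Same counting logic as in the report; -c prints the number of matching lines.
grep -c remoteRelayDomainEventIOErrorReason "$LOG"   # prints 3 here
rm -f "$LOG"
```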


repro steps:

1) run about 16 VMs
2) block communication between the host and the storage

Comment 2 RHEL Product and Program Management 2010-07-28 12:37:59 UTC
This issue has been proposed when we are only considering blocker
issues in the current Red Hat Enterprise Linux release.

** If you would still like this issue considered for the current
release, ask your support representative to file as a blocker on
your behalf. Otherwise ask that it be considered for the next
Red Hat Enterprise Linux release. **

Comment 3 Dave Allan 2010-07-28 14:34:33 UTC
Is it required that more than one guest be running to trigger the messages?

Comment 4 Daniel Berrange 2010-07-28 18:30:49 UTC
What is the guest XML? In particular, what I/O error policy is set for the guests? If the policy is set to 'stop' then you should only see a handful (<10) of reports as outstanding I/O is processed and the VM is stopped. If the policy is set to 'ignore' you'll get a never-ending stream of errors, since the VM will continue retrying the I/O and failing.

If anything though, this is probably a QEMU bug - those log messages are at level 'debug' so I don't see any problem with them being logged if QEMU is sending them to us.
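For reference, the per-disk I/O error policy Daniel describes is expressed in the libvirt domain XML via the error_policy attribute on the disk's <driver> element. A minimal sketch (the device path is hypothetical, and attribute support depends on the libvirt version in use):

```xml
<!-- Sketch: error_policy="stop" pauses the guest on the first I/O error
     (a handful of events); error_policy="ignore" lets the guest keep
     retrying, producing an endless stream of events. -->
<disk device="disk" type="block">
        <source dev="/dev/mapper/example-lun"/>  <!-- hypothetical path -->
        <target bus="virtio" dev="vda"/>
        <driver cache="none" name="qemu" type="qcow2" error_policy="stop"/>
</disk>
```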

Comment 5 Dave Allan 2010-07-28 19:04:14 UTC
Given that those are debug level messages, and it seems like they represent actual events from the guest, this behavior doesn't seem like a bug to me.  I think it's fair to expect a fairly high volume of output when debug level logging is enabled, no?

Comment 6 Itamar Heim 2010-07-28 20:50:56 UTC
Haim - did you configure something to get debug messages, or is this the default mode?

Comment 7 Haim 2010-07-29 07:11:18 UTC
(In reply to comment #6)
> Haim - did you configure something to get debug messages, or is this the
> default mode?    

It's vdsm's default parameters (vdsm configures it).

from vdsm.conf
log_outputs="1:file:/var/log/libvirtd.log" # by vdsm
log_filters="1:util 1:libvirt 1:qemu 1:remote" # by vdsm
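For context, libvirt log filter levels run from 1 (debug) up to 4 (error), so "1:remote" logs every event relayed by the remote driver at debug level, which is what produces the flood. A sketch of a less verbose filter line, for illustration only (the actual change shipped in vdsm-4.9-12.el6 may differ):

```
log_outputs="1:file:/var/log/libvirtd.log"
log_filters="1:util 1:libvirt 1:qemu 3:remote"  # only warnings/errors from the remote driver
```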

Comment 8 Haim 2010-07-29 07:37:33 UTC
Moving to vdsm ownership, as it seems to be a problem with our disk policy for logging events.

                <disk device="disk" type="block">
                        <source dev="/rhev/data-center/841af73a-d3bf-4bb8-9985-0603fdcf302e/88703353-1968-4875-bdc5-604582582f22/images/84adac3a-f0dd-4542-932a..."/>
                        <target bus="virtio" dev="hda"/>
                        <driver cache="none" name="qemu" type="qcow2"/>
                </disk>
                <disk device="disk" type="block">
                        <source dev="/rhev/data-center/841af73a-d3bf-4bb8-9985-0603fdcf302e/88703353-1968-4875-bdc5-604582582f22/images/a899ebee-1328-4f66-9644..."/>
                        <target bus="virtio" dev="hdb"/>
                        <driver cache="none" name="qemu" type="qcow2"/>
                </disk>
                <controller index="0" ports="16" type="virtio-serial"/>
                <channel type="unix">
                        <target name="org.linux-kvm.port.0" type="virtio"/>
                        <source mode="bind" path="/var/lib/libvirt/qemu/channels/libvirt-rhel54x-012.org.linux-kvm.port.0"/>
                </channel>
                <interface type="bridge">
                        <mac address="00:1a:4a:23:71:83"/>
                        <model type="virtio"/>
                        <source bridge="rhevm"/>
                </interface>
                <input bus="usb" type="tablet"/>
                <graphics autoport="yes" keymap="en-us" listen="0" passwd="*****" passwdValidTo="1970-01-01T00:00:01" port="-1" type="vnc"/>

Comment 10 Haim 2010-08-19 08:00:02 UTC
verified: blocked communication between the host and storage with 16 guests running; the storage went down, and the data center as well. Ran the following:

[root@silver-vdse x86_64]# grep remoteRelayDomainEventIOErrorReason /var/log/libvirtd.log | wc -l

