Bug 619035 - [vdsm] [libvirt] libvirtd.log is flooded with I/O event error (80227 times)
Summary: [vdsm] [libvirt] libvirtd.log is flooded with I/O event error (80227 times)
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: vdsm
Version: 6.1
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Dan Kenigsberg
QA Contact: Moran Goldboim
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2010-07-28 12:16 UTC by Haim
Modified: 2014-01-13 00:46 UTC
CC List: 11 users

Fixed In Version: vdsm-4.9-12.el6
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-08-19 15:17:54 UTC
Target Upstream Version:
Embargoed:


Attachments
libvirtd log (219.44 KB, application/x-gzip)
2010-07-28 12:16 UTC, Haim

Description Haim 2010-07-28 12:16:29 UTC
Created attachment 434990: libvirtd log

Description of problem:

libvirtd.log is flooded with 'remoteRelayDomainEventIOErrorReason' about 80K times when a guest can't reach its block device (storage communication problems).
Please see the attached log and run the following:

grep remoteRelayDomainEventIOErrorReason /tmp/libvirtd.log | wc -l   # ~80K matches

number of running guests: 16 

see attached log. 

libvirt-0.8.1-18.el6.x86_64
qemu-kvm-0.12.1.2-2.96.el6.x86_64
vdsm-4.9-10.el6.x86_64
2.6.32-44.1.el6.x86_64

Repro steps:

1) Run about 16 VMs
2) Block communication between the host and the storage
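
One common way to block host-to-storage traffic for step 2 (a hypothetical sketch; the bug does not record the exact method used, and the address below is a placeholder) is to drop outgoing packets to the storage server with iptables:

    # drop all traffic from the host to the storage server (placeholder address)
    iptables -A OUTPUT -d 192.0.2.10 -j DROP
    # after reproducing, restore connectivity by deleting the rule
    iptables -D OUTPUT -d 192.0.2.10 -j DROP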

Comment 2 RHEL Program Management 2010-07-28 12:37:59 UTC
This issue has been proposed when we are only considering blocker
issues in the current Red Hat Enterprise Linux release.

** If you would still like this issue considered for the current
release, ask your support representative to file as a blocker on
your behalf. Otherwise ask that it be considered for the next
Red Hat Enterprise Linux release. **

Comment 3 Dave Allan 2010-07-28 14:34:33 UTC
Is it required that more than one guest be running to trigger the messages?

Comment 4 Daniel Berrangé 2010-07-28 18:30:49 UTC
What is the guest XML? In particular, what I/O error policy is set for the guests? If the policy is set to 'stop' then you should only see a handful (<10) of reports as outstanding I/O is processed and the VM is stopped. If the policy is set to 'ignore' you'll get a never-ending stream of errors since the VM will continue retrying the I/O and failing.

If anything though, this is probably a QEMU bug - those log messages are at level 'debug' so I don't see any problem with them being logged if QEMU is sending them to us.
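
Editorial note: the per-disk I/O error policy described above is set with the error_policy attribute on the disk's <driver> element in the domain XML. The snippet below is purely illustrative (the device path is a placeholder, not taken from this bug's guest XML):

    <disk device="disk" type="block">
      <source dev="/dev/mapper/example-lv"/>   <!-- placeholder device path -->
      <target bus="virtio" dev="vda"/>
      <driver cache="none" name="qemu" type="qcow2" error_policy="stop"/>
    </disk>

With error_policy="stop" the guest is paused on the first I/O error, so only a handful of IO_ERROR events are relayed; with "ignore" the guest keeps retrying and the event stream never ends, as described above.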

Comment 5 Dave Allan 2010-07-28 19:04:14 UTC
Given that those are debug level messages, and it seems like they represent actual events from the guest, this behavior doesn't seem like a bug to me.  I think it's fair to expect a fairly high volume of output when debug level logging is enabled, no?

Comment 6 Itamar Heim 2010-07-28 20:50:56 UTC
Haim - did you configure something to get debug messages, or is this the default mode?

Comment 7 Haim 2010-07-29 07:11:18 UTC
(In reply to comment #6)
> Haim - did you configure something to get debug messages, or is this the
> default mode?    

It's the vdsm default parameters (vdsm configures it).

from vdsm.conf
log_outputs="1:file:/var/log/libvirtd.log" # by vdsm
log_filters="1:util 1:libvirt 1:qemu 1:remote" # by vdsm
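
For context, the numeric prefix in each libvirtd log filter is the minimum level that gets logged (1=debug, 2=info, 3=warning, 4=error), so "1:remote" lets debug-level messages from the remote driver, such as remoteRelayDomainEventIOErrorReason, reach the log. A hypothetical filter line that keeps the other debug output but records only warnings and errors from the remote driver would be:

    log_filters="1:util 1:libvirt 1:qemu 3:remote" # hypothetical: raise 'remote' to warning level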

Comment 8 Haim 2010-07-29 07:37:33 UTC
Moving to vdsm ownership, as it seems to be a problem with our disk policy for logging events.

  <devices>
    <disk device="disk" type="block">
      <source dev="/rhev/data-center/841af73a-d3bf-4bb8-9985-0603fdcf302e/88703353-1968-4875-bdc5-604582582f22/images/84adac3a-f0dd-4542-932a-69d3e588ba35/235ecbc8-738a-40e5-8a10-7deeb0d5eb0e"/>
      <target bus="virtio" dev="hda"/>
      <serial>42-932a-69d3e588ba35</serial>
      <driver cache="none" name="qemu" type="qcow2"/>
    </disk>
    <disk device="disk" type="block">
      <source dev="/rhev/data-center/841af73a-d3bf-4bb8-9985-0603fdcf302e/88703353-1968-4875-bdc5-604582582f22/images/a899ebee-1328-4f66-9644-a6585e7251f6/3f66267a-14ae-4778-897e-5a4d8c26c99d"/>
      <target bus="virtio" dev="hdb"/>
      <serial>66-9644-a6585e7251f6</serial>
      <driver cache="none" name="qemu" type="qcow2"/>
    </disk>
    <controller index="0" ports="16" type="virtio-serial"/>
    <channel type="unix">
      <target name="org.linux-kvm.port.0" type="virtio"/>
      <source mode="bind" path="/var/lib/libvirt/qemu/channels/libvirt-rhel54x-012.org.linux-kvm.port.0"/>
    </channel>
    <interface type="bridge">
      <mac address="00:1a:4a:23:71:83"/>
      <model type="virtio"/>
      <source bridge="rhevm"/>
    </interface>
    <input bus="usb" type="tablet"/>
    <graphics autoport="yes" keymap="en-us" listen="0" passwd="*****" passwdValidTo="1970-01-01T00:00:01" port="-1" type="vnc"/>
  </devices>
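
Editorial note: none of the <driver> elements above carry an error_policy attribute, which is the per-disk setting discussed in comment 4. Had one been set it would appear as, for example (illustrative only, not a claim about the actual vdsm fix):

      <driver cache="none" name="qemu" type="qcow2" error_policy="stop"/>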

Comment 10 Haim 2010-08-19 08:00:02 UTC
Verified: blocked communication between host and storage while running 16 guests; the storage went down, and the data center as well. Ran the following:

[root@silver-vdse x86_64]# grep remoteRelayDomainEventIOErrorReason /var/log/libvirtd.log |wc -l 
0


2.6.32-59.1.el6.x86_64
libvirt-0.8.1-25.el6.x86_64
vdsm-4.9-13.el6.x86_64
device-mapper-multipath-0.4.9-25.el6.x86_64
lvm2-2.02.72-7.el6.x86_64
qemu-kvm-0.12.1.2-2.109.el6.x86_64

