Bug 771419
Summary: | lvm event log messages | |
---|---|---|---
Product: | Red Hat Enterprise Linux 6 | Reporter: | Kevin Fenzi (rh) <kfenzi>
Component: | lvm2 | Assignee: | Petr Rockai <prockai>
Status: | CLOSED ERRATA | QA Contact: | Cluster QE <mspqa-list>
Severity: | unspecified | Docs Contact: |
Priority: | unspecified | |
Version: | 6.2 | CC: | agk, amote, cmarthal, dwysocha, heinzm, jbrassow, kevin, mbroz, prajnoha, prockai, thornber, zkabelac
Target Milestone: | rc | |
Target Release: | --- | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | lvm2-2.02.95-1.el6 | Doc Type: | Bug Fix
Doc Text: | In previous versions, when dmeventd monitoring of multiple snapshots was enabled, dmeventd would log redundant informational messages of the form "Another thread is handling an event. Waiting...". This needlessly flooded the system log files. This behaviour has been fixed in this update. | |
Story Points: | --- | |
Clone Of: | | Environment: |
Last Closed: | 2012-06-20 15:00:46 UTC | Type: | ---
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Description
Kevin Fenzi (rh)
2012-01-03 17:11:24 UTC
Peter, well, we did reduce the number of redundant calls to dmeventd, but this is an entirely different problem. I think the log entry has to be removed, since any periodic dmeventd activity on multiple volumes (like monitored snapshots) will cause this problem. I'll check in a patch to get rid of the log message. It is entirely harmless, and there should be no side-effects.

Not sure this needs any QA, but you can check that creating multiple monitored snapshots leads to periodic log entries in the reported form, and that these go away after the patch.

Checked in.

Technical note added. If any revisions are required, please edit the "Technical Notes" field accordingly. All revisions will be proofread by the Engineering Content Services team.

New Contents:
In previous versions, when dmeventd monitoring of multiple snapshots was enabled, dmeventd would log redundant informational messages of the form "Another thread is handling an event. Waiting...". This needlessly flooded the system log files. This behaviour has been fixed in this update.

Fix verified in the following rpms.

    2.6.32-268.el6.x86_64
    lvm2-2.02.95-6.el6                        BUILT: Wed Apr 25 04:39:34 CDT 2012
    lvm2-libs-2.02.95-6.el6                   BUILT: Wed Apr 25 04:39:34 CDT 2012
    lvm2-cluster-2.02.95-6.el6                BUILT: Wed Apr 25 04:39:34 CDT 2012
    udev-147-2.41.el6                         BUILT: Thu Mar  1 13:01:08 CST 2012
    device-mapper-1.02.74-6.el6               BUILT: Wed Apr 25 04:39:34 CDT 2012
    device-mapper-libs-1.02.74-6.el6          BUILT: Wed Apr 25 04:39:34 CDT 2012
    device-mapper-event-1.02.74-6.el6         BUILT: Wed Apr 25 04:39:34 CDT 2012
    device-mapper-event-libs-1.02.74-6.el6    BUILT: Wed Apr 25 04:39:34 CDT 2012
    cmirror-2.02.95-6.el6                     BUILT: Wed Apr 25 04:39:34 CDT 2012

SCENARIO - [dual_dmeventd_monitoring]
Have dmeventd monitoring turned on for both a mirror and a snapshot at the same time.

Make multiple mirrored volumes.
Make multiple snapshot volumes:

    lvcreate -L 300M snapper -n origin
    lvcreate -s /dev/snapper/origin -c 32 -n dual_snap_1 -L 100M
    lvcreate -s /dev/snapper/origin -c 32 -n dual_snap_2 -L 100M
    lvcreate -s /dev/snapper/origin -c 32 -n dual_snap_3 -L 100M
    lvcreate -s /dev/snapper/origin -c 32 -n dual_snap_4 -L 100M
    lvcreate -s /dev/snapper/origin -c 32 -n dual_snap_5 -L 100M

Deactivate and then reactivate these volumes a few times:
    ...1
    ...2
    ...3
    ...4
    ...5

    [root@taft-01 ~]# grep Another /var/log/messages
    [root@taft-01 ~]#

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2012-0962.html

Removing external tracker bug with the id 'DOC-69772' as it is not valid for this tracker
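A minimal shell sketch of the verification flow described in the scenario above, not the exact QA harness. It assumes a volume group named "snapper" with enough free space and dmeventd snapshot monitoring enabled; the deactivate/reactivate step uses vgchange, which is an assumption, since the original comment only shows "...1" through "...5".

    # Origin plus five monitored snapshots, as in the scenario above.
    vg=snapper
    lvcreate -L 300M -n origin "$vg"
    for i in 1 2 3 4 5; do
        lvcreate -s "/dev/$vg/origin" -c 32 -n "dual_snap_$i" -L 100M
    done

    # Deactivate and reactivate the volumes a few times. The original comment
    # elides the exact commands, so whole-VG vgchange is an assumption here.
    for pass in 1 2 3 4 5; do
        vgchange -an "$vg"
        vgchange -ay "$vg"
    done

    # On fixed builds (lvm2-2.02.95 and later) this should print nothing;
    # affected versions log repeated
    # "Another thread is handling an event. Waiting..." messages.
    grep Another /var/log/messages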