Bug 771419 - lvm event log messages
Summary: lvm event log messages
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Petr Rockai
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-01-03 17:11 UTC by Kevin Fenzi (rh)
Modified: 2013-10-04 00:27 UTC
CC List: 12 users

Fixed In Version: lvm2-2.02.95-1.el6
Doc Type: Bug Fix
Doc Text:
In previous versions, when dmeventd monitoring of multiple snapshots was enabled, dmeventd would log redundant informational messages of the form "Another thread is handling an event. Waiting...". This needlessly flooded system log files. This behaviour has been fixed in this update.
Clone Of:
Environment:
Last Closed: 2012-06-20 15:00:46 UTC
Target Upstream Version:
Embargoed:




Links
Red Hat Product Errata RHBA-2012:0962 (normal, SHIPPED_LIVE): lvm2 bug fix and enhancement update. Last updated 2012-06-19 21:12:11 UTC.

Description Kevin Fenzi (rh) 2012-01-03 17:11:24 UTC
Description of problem:

We are seeing an issue on a backup server where we get: 
Jan  3 14:27:52 backup02.vpn.fedoraproject.org lvm[1645]: Another thread is handling an event. Waiting...
Jan  3 14:28:02 backup02.vpn.fedoraproject.org lvm[1645]: Another thread is handling an event. Waiting...

Our backup script uses snapshots. It keeps 3 days of snapshots, so at any given time we have 3 snapshots and the main volume. The volume is also luks encrypted, and the machine is a kvm guest. 

  LV             VG            Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  BackupSnap-Mon BackupGroup01 swi-a-  95.00g backup  15.83                        
  BackupSnap-Sun BackupGroup01 swi-a-  95.00g backup  23.30                        
  BackupSnap-Tue BackupGroup01 swi-a-  95.00g backup   7.99                        
  backup         BackupGroup01 owi-ao 700.00g    
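
For context, a rough sketch of what such a daily rotation might look like; the actual backup script is not attached to this bug, and the snapshot size and naming below are only inferred from the lvs output above:

# hypothetical rotation step, run once a day before the backup
TODAY=$(date +%a)                       # e.g. Tue
OLDEST=$(date -d '3 days ago' +%a)      # weekday of the snapshot to retire
lvremove -f BackupGroup01/BackupSnap-$OLDEST 2>/dev/null        # drop the oldest snapshot if present
lvcreate -s -L 95G -n BackupSnap-$TODAY BackupGroup01/backup    # take today's snapshot of the origin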

uname -a:

Linux backup02.fedoraproject.org 2.6.32-220.el6.x86_64 #1 SMP Wed Nov 9 08:03:13 EST 2011 x86_64 x86_64 x86_64 GNU/Linux

rpm -q lvm2: 

lvm2-2.02.87-6.el6.x86_64

ps axuww | grep event: 
root      1645  0.0  0.1 235816 15028 ?        S<Lsl 2011   4:54 /sbin/dmeventd

Happy to provide more info. The messages don't seem to cause any issues; they just fill the logs with a lot of noise.

Comment 6 Petr Rockai 2012-01-14 13:19:33 UTC
Peter, well, we did reduce the number of redundant calls to dmeventd, but this is an entirely different problem. I think the log entry has to be removed, since any periodic dmeventd activity on multiple volumes (like monitored snapshots) will cause this problem. I'll check in a patch to get rid of the log message. It is entirely harmless, and there should be no side-effects.

Not sure this needs any QA, but you can check that creating multiple monitored snapshots leads to periodic log entries in the reported form, and that these go away after the patch.
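
For example, one way to exercise this on a test box (the volume group and snapshot names here are made up, and roughly mirror the scenario later used for verification in comment 12):

# create an origin and several monitored snapshots on a scratch VG
lvcreate -L 300M -n origin vg_test
for i in 1 2 3; do
    lvcreate -s /dev/vg_test/origin -L 100M -n snap_$i
done
# with the unfixed lvm2, periodic dmeventd activity on the snapshots fills the log with
# "Another thread is handling an event. Waiting..."; after the patch the message is gone
grep "Another thread is handling an event" /var/log/messages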

Comment 8 Petr Rockai 2012-02-01 20:12:22 UTC
Checked in.

Comment 11 Petr Rockai 2012-04-25 11:19:19 UTC
    Technical note added. If any revisions are required, please edit the "Technical Notes" field
    accordingly. All revisions will be proofread by the Engineering Content Services team.
    
    New Contents:
In previous versions, when dmeventd monitoring of multiple snapshots was enabled, dmeventd would log redundant informational messages of the form "Another thread is handling an event. Waiting...". This needlessly flooded system log files. This behaviour has been fixed in this update.

Comment 12 Corey Marthaler 2012-04-25 21:10:05 UTC
Fix verified in the following rpms.


2.6.32-268.el6.x86_64
lvm2-2.02.95-6.el6    BUILT: Wed Apr 25 04:39:34 CDT 2012
lvm2-libs-2.02.95-6.el6    BUILT: Wed Apr 25 04:39:34 CDT 2012
lvm2-cluster-2.02.95-6.el6    BUILT: Wed Apr 25 04:39:34 CDT 2012
udev-147-2.41.el6    BUILT: Thu Mar  1 13:01:08 CST 2012
device-mapper-1.02.74-6.el6    BUILT: Wed Apr 25 04:39:34 CDT 2012
device-mapper-libs-1.02.74-6.el6    BUILT: Wed Apr 25 04:39:34 CDT 2012
device-mapper-event-1.02.74-6.el6    BUILT: Wed Apr 25 04:39:34 CDT 2012
device-mapper-event-libs-1.02.74-6.el6    BUILT: Wed Apr 25 04:39:34 CDT 2012
cmirror-2.02.95-6.el6    BUILT: Wed Apr 25 04:39:34 CDT 2012


SCENARIO - [dual_dmeventd_monitoring]
Have dmeventd monitoring turned on for both a mirror and snapshot at the same time
Make multiple mirrored volumes
Make multiple snapshot volumes
lvcreate -L 300M snapper -n origin
lvcreate -s /dev/snapper/origin -c 32 -n dual_snap_1 -L 100M
lvcreate -s /dev/snapper/origin -c 32 -n dual_snap_2 -L 100M
lvcreate -s /dev/snapper/origin -c 32 -n dual_snap_3 -L 100M
lvcreate -s /dev/snapper/origin -c 32 -n dual_snap_4 -L 100M
lvcreate -s /dev/snapper/origin -c 32 -n dual_snap_5 -L 100M

Deactivate and then reactivate these volumes a few times
        ...1 ...2 ...3 ...4 ...5 


[root@taft-01 ~]# grep Another /var/log/messages
[root@taft-01 ~]#
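
The deactivate/reactivate step above is not spelled out in the log; a possible way to drive it with lvchange (assuming the snapper VG and dual_snap_* names from the lvcreate calls above) would be:

# cycle the snapshots through several deactivate/reactivate rounds
for round in 1 2 3 4 5; do
    for lv in dual_snap_1 dual_snap_2 dual_snap_3 dual_snap_4 dual_snap_5; do
        lvchange -an snapper/$lv    # deactivate the snapshot
        lvchange -ay snapper/$lv    # reactivate it, re-registering it with dmeventd
    done
done
# afterwards the grep against /var/log/messages shown above should still print nothing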

Comment 14 errata-xmlrpc 2012-06-20 15:00:46 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2012-0962.html

Comment 15 Red Hat Bugzilla 2013-10-04 00:27:24 UTC
Removing external tracker bug with the id 'DOC-69772' as it is not valid for this tracker

