Description of problem:
We are seeing an issue on a backup server where we get:
Jan 3 14:27:52 backup02.vpn.fedoraproject.org lvm: Another thread is handling an event. Waiting...
Jan 3 14:28:02 backup02.vpn.fedoraproject.org lvm: Another thread is handling an event. Waiting...
Our backup script uses snapshots. It keeps 3 days of snapshots, so at any given time we have 3 snapshots plus the main volume. The volume is also LUKS-encrypted, and the machine is a KVM guest.
  LV             VG            Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  BackupSnap-Mon BackupGroup01 swi-a-  95.00g backup 15.83
  BackupSnap-Sun BackupGroup01 swi-a-  95.00g backup 23.30
  BackupSnap-Tue BackupGroup01 swi-a-  95.00g backup  7.99
  backup         BackupGroup01 owi-ao 700.00g
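The rotation described above can be sketched as a short shell script. The VG/LV names and the 95G snapshot size come from the lvs output; the day-keyed naming and the rotation logic are assumptions about how such a script might work, not the reporter's actual script.

```shell
#!/bin/sh
# Hypothetical sketch of a day-keyed rotating snapshot scheme.
# VG, origin LV, and snapshot size are taken from the lvs output above;
# everything else is an assumption.
VG=BackupGroup01
ORIGIN=backup
DAY=$(date +%a)                 # e.g. Mon, Sun, Tue
SNAP="BackupSnap-$DAY"

# Drop the stale snapshot carrying today's name, if one still exists.
if lvs "$VG/$SNAP" >/dev/null 2>&1; then
    lvremove -f "$VG/$SNAP"
fi

# Take today's snapshot (95G matches the sizes shown above).
lvcreate -s -L 95G -n "$SNAP" "$VG/$ORIGIN"
```

With one such snapshot taken per day and each day-named snapshot replaced in place, three named snapshots plus the origin are present at any given time, matching the lvs output.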
Linux backup02.fedoraproject.org 2.6.32-220.el6.x86_64 #1 SMP Wed Nov 9 08:03:13 EST 2011 x86_64 x86_64 x86_64 GNU/Linux
rpm -q lvm2:
ps axuww | grep event:
root 1645 0.0 0.1 235816 15028 ? S<Lsl 2011 4:54 /sbin/dmeventd
Happy to provide more info. The messages don't seem to cause any actual problems; they just fill the logs with a lot of noise.
Peter, well, we did reduce the number of redundant calls to dmeventd, but this is an entirely different problem. I think the log entry has to be removed, since any periodic dmeventd activity on multiple volumes (such as monitored snapshots) will trigger it. I'll check in a patch to get rid of the log message. It is entirely harmless, and there should be no side effects.
Not sure this needs any QA, but you can check that creating multiple monitored snapshots leads to periodic log entries of the reported form, and that these go away after the patch.
Technical note added. If any revisions are required, please edit the "Technical Notes" field
accordingly. All revisions will be proofread by the Engineering Content Services team.
In previous versions, when dmeventd monitoring of multiple snapshots was enabled, dmeventd logged redundant informational messages of the form "Another thread is handling an event. Waiting...". This needlessly flooded system log files. This behavior has been fixed in this update.
Fix verified in the following rpms:
lvm2-2.02.95-6.el6 BUILT: Wed Apr 25 04:39:34 CDT 2012
lvm2-libs-2.02.95-6.el6 BUILT: Wed Apr 25 04:39:34 CDT 2012
lvm2-cluster-2.02.95-6.el6 BUILT: Wed Apr 25 04:39:34 CDT 2012
udev-147-2.41.el6 BUILT: Thu Mar 1 13:01:08 CST 2012
device-mapper-1.02.74-6.el6 BUILT: Wed Apr 25 04:39:34 CDT 2012
device-mapper-libs-1.02.74-6.el6 BUILT: Wed Apr 25 04:39:34 CDT 2012
device-mapper-event-1.02.74-6.el6 BUILT: Wed Apr 25 04:39:34 CDT 2012
device-mapper-event-libs-1.02.74-6.el6 BUILT: Wed Apr 25 04:39:34 CDT 2012
cmirror-2.02.95-6.el6 BUILT: Wed Apr 25 04:39:34 CDT 2012
SCENARIO - [dual_dmeventd_monitoring]
Have dmeventd monitoring turned on for both a mirror and a snapshot at the same time
Make multiple mirrored volumes
Make multiple snapshot volumes
lvcreate -L 300M snapper -n origin
lvcreate -s /dev/snapper/origin -c 32 -n dual_snap_1 -L 100M
lvcreate -s /dev/snapper/origin -c 32 -n dual_snap_2 -L 100M
lvcreate -s /dev/snapper/origin -c 32 -n dual_snap_3 -L 100M
lvcreate -s /dev/snapper/origin -c 32 -n dual_snap_4 -L 100M
lvcreate -s /dev/snapper/origin -c 32 -n dual_snap_5 -L 100M
Deactivate and then reactivate these volumes a few times
...1 ...2 ...3 ...4 ...5
[root@taft-01 ~]# grep Another /var/log/messages
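The deactivate/reactivate step above can be sketched as a shell loop. The VG name (snapper) and snapshot names (dual_snap_1..5) come from the lvcreate lines; the iteration count and the use of lvchange --monitor are assumptions about how the cycling might be scripted.

```shell
#!/bin/sh
# Hypothetical sketch: cycle the monitored snapshots a few times, then
# check the log for the redundant dmeventd message. Names come from the
# lvcreate lines above; the loop structure is an assumption.
for i in 1 2 3 4 5; do
    printf '...%s ' "$i"
    for lv in dual_snap_1 dual_snap_2 dual_snap_3 dual_snap_4 dual_snap_5; do
        lvchange -an "snapper/$lv"
        lvchange -ay --monitor y "snapper/$lv"
    done
done
echo

# With the fixed packages, this grep should produce no output:
grep Another /var/log/messages
```

An empty grep result, as shown above, indicates the redundant "Another thread is handling an event. Waiting..." messages are no longer logged.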
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.
Removing external tracker bug with the id 'DOC-69772' as it is not valid for this tracker