Bug 1060737 - The dm-event.service reload goes wrong - it passes through the failed state and a completely new instance is used in the end
Summary: The dm-event.service reload goes wrong - it passes through the failed state and a completely new instance is used in the end
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.0
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: LVM and device-mapper development team
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-02-03 13:34 UTC by Peter Rajnoha
Modified: 2023-03-08 07:26 UTC
CC List: 17 users

Fixed In Version: lvm2-2.02.105-4.el7
Doc Type: Bug Fix
Doc Text:
Clone Of: 1060134
Environment:
Last Closed: 2014-06-13 11:18:31 UTC
Target Upstream Version:
Embargoed:


Attachments: none

Description Peter Rajnoha 2014-02-03 13:34:44 UTC
This applies to RHEL7 as well...

+++ This bug was initially created as a clone of Bug #1060134 +++

When trying to reload dm-event.service, everything looks OK from the high-level status:

[0] raw/~ # systemctl status dm-event
dm-event.service - Device-mapper event daemon
   Loaded: loaded (/usr/lib/systemd/system/dm-event.service; disabled)
   Active: active (running) since Fri 2014-01-31 11:40:32 CET; 24s ago
     Docs: man:dmeventd(8)
  Process: 604 ExecStart=/usr/sbin/dmeventd (code=exited, status=0/SUCCESS)
 Main PID: 605 (dmeventd)
   CGroup: /system.slice/dm-event.service
           `-605 /usr/sbin/dmeventd

Jan 31 11:40:32 raw.virt dmeventd[605]: dmeventd ready for processing.
Jan 31 11:40:32 raw.virt lvm[605]: Monitoring snapshot vg-lvol1

[0] raw/~ # systemctl reload dm-event

[0] raw/~ # systemctl status dm-event
dm-event.service - Device-mapper event daemon
   Loaded: loaded (/usr/lib/systemd/system/dm-event.service; disabled)
   Active: active (running) since Fri 2014-01-31 11:41:14 CET; 4s ago
     Docs: man:dmeventd(8)
  Process: 1198 ExecReload=/usr/sbin/dmeventd -R (code=exited, status=0/SUCCESS)
  Process: 1205 ExecStart=/usr/sbin/dmeventd (code=exited, status=0/SUCCESS)
 Main PID: 1206 (dmeventd)
   CGroup: /system.slice/dm-event.service
           `-1206 /usr/sbin/dmeventd

Jan 31 11:41:14 raw.virt dmeventd[1206]: dmeventd ready for processing.



But looking in more detail at the systemd debug log, we can see that things are not going as expected:

Jan 31 11:40:30 raw.virt systemd[1]: Installed new job dm-event.socket/start as 128
...
Jan 31 11:40:30 raw.virt systemd[1]: dm-event.socket changed dead -> listening
Jan 31 11:40:30 raw.virt systemd[1]: Job dm-event.socket/start finished, result=done
Jan 31 11:40:30 raw.virt systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 31 11:40:30 raw.virt systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 31 11:40:30 raw.virt systemd[1]: About to execute: /usr/sbin/lvm vgchange --monitor
...
Jan 31 11:40:32 raw.virt systemd[1]: Incoming traffic on dm-event.socket
Jan 31 11:40:32 raw.virt systemd[1]: Trying to enqueue job dm-event.service/start/replace
Jan 31 11:40:32 raw.virt systemd[1]: Installed new job dm-event.service/start as 275
Jan 31 11:40:32 raw.virt systemd[1]: Enqueued job dm-event.service/start as 275
Jan 31 11:40:32 raw.virt systemd[1]: dm-event.socket changed listening -> running
Jan 31 11:40:32 raw.virt systemd[1]: Starting Device-mapper event daemon...
Jan 31 11:40:32 raw.virt systemd[1]: About to execute: /usr/sbin/dmeventd
Jan 31 11:40:32 raw.virt systemd[1]: Forked /usr/sbin/dmeventd as 604
Jan 31 11:40:32 raw.virt systemd[1]: dm-event.service changed dead -> start
Jan 31 11:40:32 raw.virt systemd[604]: Executing: /usr/sbin/dmeventd
Jan 31 11:40:32 raw.virt systemd[1]: Received SIGCHLD from PID 604 (dmeventd).
Jan 31 11:40:32 raw.virt systemd[1]: Got SIGCHLD for process 604 (dmeventd)
Jan 31 11:40:32 raw.virt systemd[1]: Child 604 died (code=exited, status=0/SUCCESS)
Jan 31 11:40:32 raw.virt systemd[1]: Child 604 belongs to dm-event.service
Jan 31 11:40:32 raw.virt systemd[1]: dm-event.service: control process exited, code=exited status=0
Jan 31 11:40:32 raw.virt systemd[1]: dm-event.service got final SIGCHLD for state start
Jan 31 11:40:32 raw.virt systemd[1]: Main PID loaded: 605
Jan 31 11:40:32 raw.virt systemd[1]: dm-event.service changed start -> running
Jan 31 11:40:32 raw.virt systemd[1]: Job dm-event.service/start finished, result=done
Jan 31 11:40:32 raw.virt systemd[1]: Started Device-mapper event daemon.
...

  ^^ this part is OK - dm-event.service is automatically activated on boot by the dm-event.socket unit
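
For reference, the socket-activation pairing can be checked directly; a minimal check, assuming the unit names from the log above:

# dm-event.socket should be listening (or running) and should list dm-event.service as the unit it activates:
systemctl status dm-event.socket
systemctl list-sockets | grep dm-event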


And below is the "reload" action that goes wrong:

Jan 31 11:41:08 raw.virt systemd[1]: Trying to enqueue job dm-event.service/reload/replace
Jan 31 11:41:08 raw.virt systemd[1]: Installed new job dm-event.service/reload as 668
Jan 31 11:41:08 raw.virt systemd[1]: Enqueued job dm-event.service/reload as 668
Jan 31 11:41:08 raw.virt systemd[1]: Reloading Device-mapper event daemon.
Jan 31 11:41:08 raw.virt systemd[1]: About to execute: /usr/sbin/dmeventd -R
Jan 31 11:41:08 raw.virt systemd[1]: Forked /usr/sbin/dmeventd as 1198
Jan 31 11:41:08 raw.virt systemd[1]: dm-event.service changed running -> reload
...
Jan 31 11:41:08 raw.virt systemd[1]: Received SIGCHLD from PID 605 (dmeventd).
Jan 31 11:41:08 raw.virt systemd[1]: Got SIGCHLD for process 605 (dmeventd)
Jan 31 11:41:08 raw.virt systemd[1]: Child 605 died (code=killed, status=9/KILL)
Jan 31 11:41:08 raw.virt systemd[1]: Child 605 belongs to dm-event.service
Jan 31 11:41:08 raw.virt systemd[1]: dm-event.service: main process exited, code=killed, status=9/KILL
Jan 31 11:41:14 raw.virt dmeventd[1198]: No input from event server.
Jan 31 11:41:14 raw.virt dmeventd[1199]: dmeventd ready for processing.
Jan 31 11:41:14 raw.virt systemd[1]: Received SIGCHLD from PID 1198 (dmeventd).
Jan 31 11:41:14 raw.virt systemd[1]: Got SIGCHLD for process 1198 (dmeventd)
Jan 31 11:41:14 raw.virt systemd[1]: Child 1198 died (code=exited, status=0/SUCCESS)
Jan 31 11:41:14 raw.virt systemd[1]: Child 1198 belongs to dm-event.service
Jan 31 11:41:14 raw.virt systemd[1]: dm-event.service: control process exited, code=exited status=0
Jan 31 11:41:14 raw.virt systemd[1]: dm-event.service got final SIGCHLD for state reload
Jan 31 11:41:14 raw.virt systemd[1]: Main PID changing: 0 -> 1199
Jan 31 11:41:14 raw.virt systemd[1]: dm-event.service changed reload -> running
Jan 31 11:41:14 raw.virt systemd[1]: Job dm-event.service/reload finished, result=done
Jan 31 11:41:14 raw.virt systemd[1]: Reloaded Device-mapper event daemon.
...
Jan 31 11:41:14 raw.virt systemd[1]: Received SIGCHLD from PID 1199 (dmeventd).
Jan 31 11:41:14 raw.virt systemd[1]: Got SIGCHLD for process 1199 (dmeventd)
Jan 31 11:41:14 raw.virt systemd[1]: Child 1199 died (code=killed, status=9/KILL)
Jan 31 11:41:14 raw.virt systemd[1]: Child 1199 belongs to dm-event.service
Jan 31 11:41:14 raw.virt systemd[1]: dm-event.service: main process exited, code=killed, status=9/KILL
Jan 31 11:41:14 raw.virt systemd[1]: dm-event.service changed running -> failed
Jan 31 11:41:14 raw.virt systemd[1]: Unit dm-event.service entered failed state.
Jan 31 11:41:14 raw.virt systemd[1]: dm-event.socket got notified about service death (failed permanently: no)
Jan 31 11:41:14 raw.virt systemd[1]: dm-event.socket changed running -> listening
Jan 31 11:41:14 raw.virt systemd[1]: Incoming traffic on dm-event.socket
Jan 31 11:41:14 raw.virt systemd[1]: Trying to enqueue job dm-event.service/start/replace
Jan 31 11:41:14 raw.virt systemd[1]: Installed new job dm-event.service/start as 669
Jan 31 11:41:14 raw.virt systemd[1]: Enqueued job dm-event.service/start as 669
Jan 31 11:41:14 raw.virt systemd[1]: dm-event.socket changed listening -> running
Jan 31 11:41:14 raw.virt systemd[1]: Starting Device-mapper event daemon...
Jan 31 11:41:14 raw.virt systemd[1]: About to execute: /usr/sbin/dmeventd
Jan 31 11:41:14 raw.virt systemd[1]: Forked /usr/sbin/dmeventd as 1202
Jan 31 11:41:14 raw.virt systemd[1]: dm-event.service changed failed -> start
...
Jan 31 11:41:14 raw.virt systemd[1202]: Executing: /usr/sbin/dmeventd
...
Jan 31 11:41:14 raw.virt dmeventd[1203]: dmeventd ready for processing.
Jan 31 11:41:14 raw.virt systemd[1]: Received SIGCHLD from PID 1202 (dmeventd).
Jan 31 11:41:14 raw.virt systemd[1]: Got SIGCHLD for process 1202 (dmeventd)
Jan 31 11:41:14 raw.virt systemd[1]: Child 1202 died (code=exited, status=0/SUCCESS)
Jan 31 11:41:14 raw.virt systemd[1]: Child 1202 belongs to dm-event.service
Jan 31 11:41:14 raw.virt systemd[1]: dm-event.service: control process exited, code=exited status=0
Jan 31 11:41:14 raw.virt systemd[1]: dm-event.service got final SIGCHLD for state start
Jan 31 11:41:14 raw.virt systemd[1]: PID 1203 read from file /run/dmeventd.pid does not exist.
Jan 31 11:41:14 raw.virt systemd[1]: Setting watch for dm-event.service's PID file /run/dmeventd.pid
Jan 31 11:41:14 raw.virt systemd[1]: Trying to read dm-event.service's PID file /run/dmeventd.pid in case it changed
Jan 31 11:41:14 raw.virt systemd[1]: Stopping watch for dm-event.service's PID file /run/dmeventd.pid
Jan 31 11:41:14 raw.virt systemd[1]: dm-event.service changed start -> failed
Jan 31 11:41:14 raw.virt systemd[1]: Job dm-event.service/start finished, result=failed
Jan 31 11:41:14 raw.virt systemd[1]: Failed to start Device-mapper event daemon.
Jan 31 11:41:14 raw.virt systemd[1]: dm-event.socket got notified about service death (failed permanently: no)
Jan 31 11:41:14 raw.virt systemd[1]: dm-event.socket changed running -> listening
...
Jan 31 11:41:14 raw.virt systemd[1]: Unit dm-event.service entered failed state.
Jan 31 11:41:14 raw.virt systemd[1]: Incoming traffic on dm-event.socket
Jan 31 11:41:14 raw.virt systemd[1]: Trying to enqueue job dm-event.service/start/replace
Jan 31 11:41:14 raw.virt systemd[1]: Installed new job dm-event.service/start as 674
Jan 31 11:41:14 raw.virt systemd[1]: Enqueued job dm-event.service/start as 674
Jan 31 11:41:14 raw.virt systemd[1]: dm-event.socket changed listening -> running
Jan 31 11:41:14 raw.virt systemd[1]: Starting Device-mapper event daemon...
Jan 31 11:41:14 raw.virt systemd[1]: About to execute: /usr/sbin/dmeventd
Jan 31 11:41:14 raw.virt systemd[1]: Forked /usr/sbin/dmeventd as 1205
Jan 31 11:41:14 raw.virt systemd[1205]: Executing: /usr/sbin/dmeventd
Jan 31 11:41:14 raw.virt systemd[1]: dm-event.service changed failed -> start
...
Jan 31 11:41:14 raw.virt dmeventd[1206]: dmeventd ready for processing.
Jan 31 11:41:14 raw.virt systemd[1]: Received SIGCHLD from PID 1205 (dmeventd).
Jan 31 11:41:14 raw.virt systemd[1]: Got SIGCHLD for process 1205 (dmeventd)
Jan 31 11:41:14 raw.virt systemd[1]: Child 1205 died (code=exited, status=0/SUCCESS)
Jan 31 11:41:14 raw.virt systemd[1]: Child 1205 belongs to dm-event.service
Jan 31 11:41:14 raw.virt systemd[1]: dm-event.service: control process exited, code=exited status=0
Jan 31 11:41:14 raw.virt systemd[1]: dm-event.service got final SIGCHLD for state start
Jan 31 11:41:14 raw.virt systemd[1]: Main PID loaded: 1206
Jan 31 11:41:14 raw.virt systemd[1]: dm-event.service changed start -> running
Jan 31 11:41:14 raw.virt systemd[1]: Job dm-event.service/start finished, result=done
Jan 31 11:41:14 raw.virt systemd[1]: Started Device-mapper event daemon.


As can be seen from the log above, the dm-event.service unit even passes through the "failed" state, and it is then instantiated again only because the dm-event.socket unit triggers a *completely new* instance (simply because the FIFO is accessed). This is normally not visible in a plain "systemctl status", as shown at the beginning of this comment.
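
For context, this interplay comes from the service/socket unit pair. Below is an approximate reconstruction of the relevant unit options, pieced together from the log above (ExecStart, ExecReload, the /run/dmeventd.pid PID file) and from memory of the lvm2-shipped units - the exact paths and option values here are assumptions, not the authoritative unit files:

[0] raw/~ # cat /usr/lib/systemd/system/dm-event.service
[Unit]
Description=Device-mapper event daemon
Documentation=man:dmeventd(8)
Requires=dm-event.socket
After=dm-event.socket

[Service]
Type=forking
ExecStart=/usr/sbin/dmeventd
# the reload path that misbehaves in this bug:
ExecReload=/usr/sbin/dmeventd -R
PIDFile=/run/dmeventd.pid

[0] raw/~ # cat /usr/lib/systemd/system/dm-event.socket
[Unit]
Description=Device-mapper event daemon FIFOs
Documentation=man:dmeventd(8)

[Socket]
# any access to these FIFOs makes systemd enqueue a new dm-event.service/start job,
# which is why a brand new instance appears even after the reload left the unit "failed"
ListenFIFO=/run/dmeventd-server
ListenFIFO=/run/dmeventd-client

With this pairing, killing the old daemon and then touching the FIFO is enough for systemd to start the service again - which matches the start jobs 669 and 674 that appear in the log right after the failed reload.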

--- Additional comment from Peter Rajnoha on 2014-01-31 13:37:46 CET ---

I think this could work for the dmeventd restart in a systemd environment:

  - dmeventd -R calls the "restart" fn
  - we get the state from the old dmeventd
  - we order the old dmeventd to die
  
  (up until here, the procedure is the same as it was before)

  - if under systemd management, send a request to re-register all the original event registrations directly via the FIFO
  - this instantiates a new dmeventd, which reads the initial registrations right after it starts
  - the dmeventd -R call finishes
  - the newly instantiated dmeventd continues....

--- Additional comment from Peter Rajnoha on 2014-02-03 14:33:22 CET ---

The direct outcome of this bug is that we lose existing event monitoring on a dmeventd daemon restart (which normally happens on package update!).
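
A quick way to check whether monitoring was actually lost after such a restart (a sketch; it assumes the lvm2 build supports the seg_monitor report field):

# "not monitored" for a snapshot/mirror/thin pool right after a dmeventd restart
# or package update would indicate exactly this problem:
lvs -o lv_name,vg_name,seg_monitor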

Comment 3 Peter Rajnoha 2014-02-13 10:17:25 UTC
To QA:

To test this fix:
  - upgrade to a new build including the fix
  - create some volumes that are monitored, e.g. snapshots, mirrors, thin pools
  - check the dm-event status by calling "systemctl status dm-event"; note the "Main PID" and the log messages showing that the volumes are monitored, and note the timestamps of when the volumes were registered for monitoring
  - run dmeventd -R to restart the daemon
  - check "systemctl status dm-event" again; the "Main PID" should change (as this is a new instance of dmeventd) and the previously monitored devices should be re-registered for monitoring (the log timestamps will have advanced, which means the monitoring registrations were re-entered for the new dmeventd instance). A compact form of this check is sketched right below, followed by a full sample session.


[0] rhel7-a/~ # systemctl status dm-event
dm-event.service - Device-mapper event daemon
   Loaded: loaded (/usr/lib/systemd/system/dm-event.service; disabled)
   Active: active (running) since Thu 2014-02-13 05:11:11 EST; 18s ago
     Docs: man:dmeventd(8)
 Main PID: 677 (dmeventd)
   CGroup: /system.slice/dm-event.service
           `-677 /usr/sbin/dmeventd -f

Feb 13 05:11:11 rhel7-a.virt systemd[1]: Starting Device-mapper event daemon...
Feb 13 05:11:11 rhel7-a.virt systemd[1]: Started Device-mapper event daemon.
Feb 13 05:11:11 rhel7-a.virt dmeventd[677]: dmeventd ready for processing.
Feb 13 05:11:11 rhel7-a.virt lvm[677]: Monitoring snapshot vg-lvol1
Feb 13 05:11:11 rhel7-a.virt lvm[677]: Monitoring RAID device vg-lvol2 for events.
Feb 13 05:11:11 rhel7-a.virt lvm[677]: Monitoring thin vg-pool-tpool.
[0] rhel7-a/~ # systemctl reload dm-event.service
Failed to issue method call: Job type reload is not applicable for unit dm-event.service.
[0] rhel7-a/~ # dmeventd -R
[0] rhel7-a/~ # systemctl status dm-event
dm-event.service - Device-mapper event daemon
   Loaded: loaded (/usr/lib/systemd/system/dm-event.service; disabled)
   Active: active (running) since Thu 2014-02-13 05:11:54 EST; 3s ago
     Docs: man:dmeventd(8)
 Main PID: 1774 (dmeventd)
   CGroup: /system.slice/dm-event.service
           `-1774 /usr/sbin/dmeventd -f

Feb 13 05:11:54 rhel7-a.virt systemd[1]: Starting Device-mapper event daemon...
Feb 13 05:11:54 rhel7-a.virt systemd[1]: Started Device-mapper event daemon.
Feb 13 05:11:54 rhel7-a.virt dmeventd[1774]: dmeventd ready for processing.
Feb 13 05:11:54 rhel7-a.virt lvm[1774]: Monitoring snapshot vg-lvol1
Feb 13 05:11:54 rhel7-a.virt lvm[1774]: Monitoring RAID device vg-lvol2 for events.
Feb 13 05:11:54 rhel7-a.virt lvm[1774]: Monitoring thin vg-pool-tpool.

Comment 4 Peter Rajnoha 2014-02-13 10:19:01 UTC
(In reply to Peter Rajnoha from comment #3)
> [0] rhel7-a/~ # systemctl reload dm-event.service
> Failed to issue method call: Job type reload is not applicable for unit
> dm-event.service.

Also, you can check that the old "systemctl reload dm-event.service" is no longer applicable (since it did not work correctly, dmeventd -R must be called instead).
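
Presumably the updated unit simply no longer defines an ExecReload= line; systemd reports "Job type reload is not applicable" for any unit that has no reload command configured. A minimal way to confirm that on an updated system (assuming the default unit path):

# no output here means the unit defines no ExecReload= and reload is intentionally unsupported:
grep ExecReload /usr/lib/systemd/system/dm-event.service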

Comment 6 Corey Marthaler 2014-03-26 21:04:17 UTC
Fix verified in the latest rpms.


3.10.0-113.el7.x86_64
lvm2-2.02.105-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014
lvm2-libs-2.02.105-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014
lvm2-cluster-2.02.105-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014
device-mapper-1.02.84-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014
device-mapper-libs-1.02.84-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014
device-mapper-event-1.02.84-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014
device-mapper-event-libs-1.02.84-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014
device-mapper-persistent-data-0.2.8-5.el7    BUILT: Fri Feb 28 19:15:56 CST 2014
cmirror-2.02.105-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014



[root@host-032 ~]# lvcreate -m 1 -n mirror -L 100M VG
  Logical volume "mirror" created
[root@host-032 ~]# lvcreate -s VG/mirror -n snap1 -L 50M
  Rounding up size to full physical extent 52.00 MiB
  Logical volume "snap1" created
[root@host-032 ~]# lvcreate -s VG/mirror -n snap2 -L 50M
  Rounding up size to full physical extent 52.00 MiB
  Logical volume "snap2" created
[root@host-032 ~]# lvcreate --type raid1 -m 1 -L 100M -n raid VG
  Logical volume "raid" created
[root@host-032 ~]# lvcreate --type raid1 -m 1 -L 100M -n meta VG
  Logical volume "meta" created
[root@host-032 ~]# lvconvert --thinpool VG/raid --poolmetadata meta
  Logical volume "lvol0" created
  Converted VG/raid to thin pool.
[root@host-032 ~]# lvs -a -o +devices
  LV                    VG Attr       LSize   Pool Origin Data%  Cpy%Sync Devices
  [lvol0_pmspare]       VG ewi------- 100.00m                             /dev/sda1(104)
  mirror                VG owi-a-r--- 100.00m                      100.00 mirror_rimage_0(0),mirror_rimage_1(0)        
  [mirror_rimage_0]     VG iwi-aor--- 100.00m                             /dev/sda1(1)
  [mirror_rimage_1]     VG iwi-aor--- 100.00m                             /dev/sdb1(1)
  [mirror_rmeta_0]      VG ewi-aor---   4.00m                             /dev/sda1(0)
  [mirror_rmeta_1]      VG ewi-aor---   4.00m                             /dev/sdb1(0)
  raid                  VG twi-a-tz-- 100.00m               0.00          raid_tdata(0)
  [raid_tdata]          VG rwi-aor--- 100.00m                      100.00 raid_tdata_rimage_0(0),raid_tdata_rimage_1(0)
  [raid_tdata_rimage_0] VG iwi-aor--- 100.00m                             /dev/sda1(53)
  [raid_tdata_rimage_1] VG iwi-aor--- 100.00m                             /dev/sdb1(27)
  [raid_tdata_rmeta_0]  VG ewi-aor---   4.00m                             /dev/sda1(52)
  [raid_tdata_rmeta_1]  VG ewi-aor---   4.00m                             /dev/sdb1(26)
  [raid_tmeta]          VG ewi-aor--- 100.00m                      100.00 raid_tmeta_rimage_0(0),raid_tmeta_rimage_1(0)
  [raid_tmeta_rimage_0] VG iwi-aor--- 100.00m                             /dev/sda1(79)
  [raid_tmeta_rimage_1] VG iwi-aor--- 100.00m                             /dev/sdb1(53)
  [raid_tmeta_rmeta_0]  VG ewi-aor---   4.00m                             /dev/sda1(78)
  [raid_tmeta_rmeta_1]  VG ewi-aor---   4.00m                             /dev/sdb1(52)
  snap1                 VG swi-a-s---  52.00m      mirror   0.00          /dev/sda1(26)
  snap2                 VG swi-a-s---  52.00m      mirror   0.00          /dev/sda1(39)

[root@host-032 ~]# systemctl status dm-event
dm-event.service - Device-mapper event daemon
   Loaded: loaded (/usr/lib/systemd/system/dm-event.service; disabled)
   Active: active (running) since Wed 2014-03-26 15:53:39 CDT; 3min 41s ago
     Docs: man:dmeventd(8)
 Main PID: 2241 (dmeventd)
   CGroup: /system.slice/dm-event.service
           └─2241 /usr/sbin/dmeventd -f

Mar 26 15:55:20 host-032.virt.lab.msp.redhat.com lvm[2241]: raid1 array, VG-raid, is now in-sync.
Mar 26 15:55:42 host-032.virt.lab.msp.redhat.com lvm[2241]: Monitoring RAID device VG-meta for events.
Mar 26 15:55:48 host-032.virt.lab.msp.redhat.com lvm[2241]: raid1 array, VG-meta, is now in-sync.
Mar 26 15:56:23 host-032.virt.lab.msp.redhat.com lvm[2241]: No longer monitoring RAID device VG-raid for events.
Mar 26 15:56:23 host-032.virt.lab.msp.redhat.com lvm[2241]: No longer monitoring RAID device VG-meta for events.
Mar 26 15:56:23 host-032.virt.lab.msp.redhat.com lvm[2241]: Monitoring RAID device VG-meta for events.
Mar 26 15:56:23 host-032.virt.lab.msp.redhat.com lvm[2241]: No longer monitoring RAID device VG-meta for events.
Mar 26 15:56:24 host-032.virt.lab.msp.redhat.com lvm[2241]: Monitoring RAID device VG-raid_tdata for events.
Mar 26 15:56:24 host-032.virt.lab.msp.redhat.com lvm[2241]: Monitoring RAID device VG-raid_tmeta for events.
Mar 26 15:56:24 host-032.virt.lab.msp.redhat.com lvm[2241]: Monitoring thin VG-raid-tpool.

[root@host-032 ~]# dmeventd -R

[root@host-032 ~]# systemctl status dm-event
dm-event.service - Device-mapper event daemon
   Loaded: loaded (/usr/lib/systemd/system/dm-event.service; disabled)
   Active: active (running) since Wed 2014-03-26 15:58:01 CDT; 8s ago
     Docs: man:dmeventd(8)
 Main PID: 2671 (dmeventd)
   CGroup: /system.slice/dm-event.service
           └─2671 /usr/sbin/dmeventd -f

Mar 26 15:58:01 host-032.virt.lab.msp.redhat.com systemd[1]: Starting Device-mapper event daemon...
Mar 26 15:58:01 host-032.virt.lab.msp.redhat.com systemd[1]: Started Device-mapper event daemon.
Mar 26 15:58:01 host-032.virt.lab.msp.redhat.com dmeventd[2671]: dmeventd ready for processing.
Mar 26 15:58:01 host-032.virt.lab.msp.redhat.com lvm[2671]: Monitoring snapshot VG-snap1
Mar 26 15:58:01 host-032.virt.lab.msp.redhat.com lvm[2671]: Monitoring snapshot VG-snap2
Mar 26 15:58:01 host-032.virt.lab.msp.redhat.com lvm[2671]: Monitoring RAID device VG-mirror-real for events.
Mar 26 15:58:01 host-032.virt.lab.msp.redhat.com lvm[2671]: Monitoring RAID device VG-raid_tdata for events.
Mar 26 15:58:01 host-032.virt.lab.msp.redhat.com lvm[2671]: Monitoring RAID device VG-raid_tmeta for events.
Mar 26 15:58:01 host-032.virt.lab.msp.redhat.com lvm[2671]: Monitoring thin VG-raid-tpool.

[root@host-032 ~]# systemctl reload dm-event.service
Failed to issue method call: Job type reload is not applicable for unit dm-event.service.

Comment 7 Ludek Smid 2014-06-13 11:18:31 UTC
This request was resolved in Red Hat Enterprise Linux 7.0.

Contact your manager or support representative in case you have further questions about the request.

