Bug 1569431 - lvm2-monitor.service fails if there are any exported volume groups
Summary: lvm2-monitor.service fails if there are any exported volume groups
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.4
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: David Teigland
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1711360
 
Reported: 2018-04-19 09:29 UTC by pratapsingh
Modified: 2023-09-07 19:07 UTC
CC List: 17 users

Fixed In Version: lvm2-2.02.186-1.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-03-31 20:04:48 UTC
Target Upstream Version:
Embargoed:




Links
Red Hat Knowledge Base (Solution) 4820191 (last updated 2020-02-12 07:16:51 UTC)
Red Hat Product Errata RHBA-2020:1129 (last updated 2020-03-31 20:05:27 UTC)

Description pratapsingh 2018-04-19 09:29:59 UTC
Description of problem:

lvm2-monitor.service fails if there are any exported volume groups.


Version-Release number of selected component (if applicable):

systemd-sysv-219-57.el7.x86_64
systemd-libs-219-57.el7.x86_64
systemd-219-57.el7.x86_64
lvm2-2.02.177-4.el7.x86_64
lvm2-libs-2.02.177-4.el7.x86_64

How reproducible:

1. Create a physical volume, volume group, and logical volume, then export the volume group:

~~~~~~

# pvcreate /dev/sdc
  Physical volume "/dev/sdc" successfully created.

# pvs
  PV         VG   Fmt  Attr PSize   PFree
  /dev/sda2  rhel lvm2 a--  <19.00g    0 
  /dev/sdc        lvm2 ---    1.00g 1.00g

# vgcreate vgtest /dev/sdc
  Volume group "vgtest" successfully created

# lvcreate -l 100%FREE -n lvtest vgtest
  Logical volume "lvtest" created.

# lvs
  LV     VG     Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root   rhel   -wi-ao----  <17.00g                                                    
  swap   rhel   -wi-ao----    2.00g                                                    
  lvtest vgtest -wi-a----- 1020.00m                                                    


# lvchange -an /dev/vgtest/lvtest 

# vgexport vgtest
  Volume group "vgtest" successfully exported

# vgs
  VG     #PV #LV #SN Attr   VSize    VFree
  rhel     1   2   0 wz--n-  <19.00g    0 
  vgtest   1   1   0 wzx-n- 1020.00m    0 

# pvs
  PV         VG     Fmt  Attr PSize    PFree
  /dev/sda2  rhel   lvm2 a--   <19.00g    0 
  /dev/sdc   vgtest lvm2 ax-  1020.00m    0 

~~~~~~~~~~~~~

2. Stop the lvm2-monitor service, then try to start and restart it:

# systemctl stop lvm2-monitor

# systemctl start lvm2-monitor
Job for lvm2-monitor.service failed because the control process exited with error code. See "systemctl status lvm2-monitor.service" and "journalctl -xe" for details.

# systemctl status lvm2-monitor
● lvm2-monitor.service - Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling
   Loaded: loaded (/usr/lib/systemd/system/lvm2-monitor.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Thu 2018-04-19 14:57:26 IST; 8s ago
     Docs: man:dmeventd(8)
           man:lvcreate(8)
           man:lvchange(8)
           man:vgchange(8)
  Process: 1346 ExecStart=/usr/sbin/lvm vgchange --monitor y --ignoreskippedcluster (code=exited, status=5)
 Main PID: 1346 (code=exited, status=5)

Apr 19 14:57:26 rhel7-5 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Apr 19 14:57:26 rhel7-5 lvm[1346]: 2 logical volume(s) in volume group "rhel" monitored
Apr 19 14:57:26 rhel7-5 lvm[1346]: Volume group "vgtest" is exported
Apr 19 14:57:26 rhel7-5 systemd[1]: lvm2-monitor.service: main process exited, code=exited, status=5/NOTINSTALLED
Apr 19 14:57:26 rhel7-5 systemd[1]: Failed to start Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Apr 19 14:57:26 rhel7-5 systemd[1]: Unit lvm2-monitor.service entered failed state.
Apr 19 14:57:26 rhel7-5 systemd[1]: lvm2-monitor.service failed.


# systemctl restart lvm2-monitor
Job for lvm2-monitor.service failed because the control process exited with error code. See "systemctl status lvm2-monitor.service" and "journalctl -xe" for details.

# systemctl status lvm2-monitor
● lvm2-monitor.service - Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling
   Loaded: loaded (/usr/lib/systemd/system/lvm2-monitor.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Thu 2018-04-19 10:54:54 IST; 5s ago
     Docs: man:dmeventd(8)
           man:lvcreate(8)
           man:lvchange(8)
           man:vgchange(8)
  Process: 1735 ExecStop=/usr/sbin/lvm vgchange --monitor n --ignoreskippedcluster (code=exited, status=5)
  Process: 1742 ExecStart=/usr/sbin/lvm vgchange --monitor y --ignoreskippedcluster (code=exited, status=5)
 Main PID: 1742 (code=exited, status=5)

Apr 19 10:54:54 rhel7-5 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Apr 19 10:54:54 rhel7-5 lvm[1742]: Volume group "vgtest" is exported
Apr 19 10:54:54 rhel7-5 lvm[1742]: 2 logical volume(s) in volume group "rhel" monitored
Apr 19 10:54:54 rhel7-5 systemd[1]: lvm2-monitor.service: main process exited, code=exited, status=5/NOTINSTALLED
Apr 19 10:54:54 rhel7-5 systemd[1]: Failed to start Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Apr 19 10:54:54 rhel7-5 systemd[1]: Unit lvm2-monitor.service entered failed state.
Apr 19 10:54:54 rhel7-5 systemd[1]: lvm2-monitor.service failed.
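
(For reference, the ExecStart command that systemd runs can be invoked by hand to see the same failure directly; a minimal sketch, with the output taken from the journal lines above and exit code 5 matching the status=5/NOTINSTALLED reported by systemd:)

# /usr/sbin/lvm vgchange --monitor y --ignoreskippedcluster
  2 logical volume(s) in volume group "rhel" monitored
  Volume group "vgtest" is exported
# echo $?
5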



Steps to Reproduce:
1. Create a PV, VG, and LV, then export the VG (see the commands above).
2. Stop the lvm2-monitor service, then start or restart it.


Actual results:

lvm2-monitor.service fails to start.

Expected results:

lvm2-monitor.service should start without any error messages.
 

Additional info:

Comment 7 Zdenek Kabelac 2019-01-29 16:58:44 UTC
I assume something like adding support for --ignoreexported might fly?

And we may consider whether its default behavior should be y|n and configurable via lvm.conf.

Comment 8 David Teigland 2019-01-29 17:24:13 UTC
We'll just fix it.  This is simple to do correctly in lvm; we just need to fix it for exported VGs.  I created a new policy several years ago for this, and implemented it for foreign and shared VGs; it's worked perfectly.  That approach is to ignore VGs that cannot be accessed unless that VG is explicitly requested on the command line, in which case it triggers an error message.
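
(To make that policy concrete, a minimal sketch using the VG from this report; the comments describe the intended behavior, the output is illustrative rather than captured:)

# vgchange --monitor y           # processes all VGs: the exported vgtest is skipped, exit code 0
# vgchange --monitor y vgtest    # vgtest requested explicitly: per the policy above, lvm reports it is exported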

Comment 11 David Teigland 2019-06-20 21:08:13 UTC
pushed upstream:
https://sourceware.org/git/?p=lvm2.git;a=commit;h=82b137ef2f7f1b6fc1bbf83918750037835a9568

When there's a rhel7 ack I'll push to the stable branch.
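
(To inspect the upstream change locally, a sketch assuming the sourceware git URL; the commit hash is the one linked above:)

# git clone https://sourceware.org/git/lvm2.git
# cd lvm2
# git show 82b137ef2f7f1b6fc1bbf83918750037835a9568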

Comment 13 David Teigland 2019-07-31 18:27:29 UTC
pushed to stable-2.02:

https://sourceware.org/git/?p=lvm2.git;a=commit;h=5ddd1ead2dcaf9594a025a15986b8bac573c81b2

[root@null-02 ~]# vgs
  VG           #PV #LV #SN Attr   VSize   VFree
  cc             2   3   0 wzx-n-   1.82t 1.82t
  rhel_null-02   1   3   0 wz--n- 465.27g    0 
[root@null-02 ~]# systemctl start lvm2-monitor
[root@null-02 ~]# systemctl status lvm2-monitor
● lvm2-monitor.service - Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling
   Loaded: loaded (/usr/lib/systemd/system/lvm2-monitor.service; enabled; vendor preset: enabled)
   Active: active (exited) since Wed 2019-07-31 07:04:15 CDT; 4s ago
     Docs: man:dmeventd(8)
           man:lvcreate(8)
           man:lvchange(8)
           man:vgchange(8)
  Process: 26016 ExecStop=/usr/sbin/lvm vgchange --monitor n --ignoreskippedcluster (code=exited, status=0/SUCCESS)
  Process: 26039 ExecStart=/usr/sbin/lvm vgchange --monitor y --ignoreskippedcluster (code=exited, status=0/SUCCESS)
 Main PID: 26039 (code=exited, status=0/SUCCESS)

Jul 31 07:04:14 null-02 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jul 31 07:04:15 null-02 lvm[26039]: 3 logical volume(s) in volume group "rhel_null-02" monitored
Jul 31 07:04:15 null-02 systemd[1]: Started Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
[root@null-02 ~]# vgchange --monitor n --ignoreskippedcluster cc
[root@null-02 ~]# vgchange --monitor y --ignoreskippedcluster cc

Comment 15 Corey Marthaler 2019-11-11 22:47:56 UTC
Fix verified in the latest rpms.

3.10.0-1109.el7.x86_64

lvm2-2.02.186-3.el7    BUILT: Fri Nov  8 07:07:01 CST 2019
lvm2-libs-2.02.186-3.el7    BUILT: Fri Nov  8 07:07:01 CST 2019
lvm2-cluster-2.02.186-3.el7    BUILT: Fri Nov  8 07:07:01 CST 2019
lvm2-lockd-2.02.186-3.el7    BUILT: Fri Nov  8 07:07:01 CST 2019
device-mapper-1.02.164-3.el7    BUILT: Fri Nov  8 07:07:01 CST 2019
device-mapper-libs-1.02.164-3.el7    BUILT: Fri Nov  8 07:07:01 CST 2019
device-mapper-event-1.02.164-3.el7    BUILT: Fri Nov  8 07:07:01 CST 2019
device-mapper-event-libs-1.02.164-3.el7    BUILT: Fri Nov  8 07:07:01 CST 2019
device-mapper-persistent-data-0.8.5-1.el7    BUILT: Mon Jun 10 03:58:20 CDT 2019


[root@hayes-01 ~]# lvcreate -l 100%FREE -n lvtest snapper_thinp
  Logical volume "lvtest" created.
[root@hayes-01 ~]# lvs -a -o +devices
  LV     VG            Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices     
  lvtest snapper_thinp -wi-a----- 9.09t                                                     /dev/sde1(0)
  lvtest snapper_thinp -wi-a----- 9.09t                                                     /dev/sdg1(0)
  lvtest snapper_thinp -wi-a----- 9.09t                                                     /dev/sdh1(0)
  lvtest snapper_thinp -wi-a----- 9.09t                                                     /dev/sdb1(0)
  lvtest snapper_thinp -wi-a----- 9.09t                                                     /dev/sdj1(0)
[root@hayes-01 ~]# vgs
  VG            #PV #LV #SN Attr   VSize VFree
  snapper_thinp   5   1   0 wz--n- 9.09t    0 


[root@hayes-01 ~]# lvchange -an /dev/snapper_thinp/lvtest
[root@hayes-01 ~]# vgexport snapper_thinp
  Volume group "snapper_thinp" successfully exported


[root@hayes-01 ~]# lvs -a -o +devices
  Volume group snapper_thinp is exported
[root@hayes-01 ~]# vgs
  VG            #PV #LV #SN Attr   VSize VFree
  snapper_thinp   5   1   0 wzx-n- 9.09t    0 
[root@hayes-01 ~]# pvs
  PV         VG            Fmt  Attr PSize  PFree
  /dev/sdb1  snapper_thinp lvm2 ax-  <1.82t    0 
  /dev/sde1  snapper_thinp lvm2 ax-  <1.82t    0 
  /dev/sdg1  snapper_thinp lvm2 ax-  <1.82t    0 
  /dev/sdh1  snapper_thinp lvm2 ax-  <1.82t    0 
  /dev/sdj1  snapper_thinp lvm2 ax-  <1.82t    0 

[root@hayes-01 ~]# systemctl start lvm2-monitor
[root@hayes-01 ~]# systemctl status lvm2-monitor
 lvm2-monitor.service - Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling
   Loaded: loaded (/usr/lib/systemd/system/lvm2-monitor.service; enabled; vendor preset: enabled)
   Active: active (exited) since Mon 2019-11-11 14:51:16 CST; 1h 49min ago
     Docs: man:dmeventd(8)
           man:lvcreate(8)
           man:lvchange(8)
           man:vgchange(8)
 Main PID: 1112 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/lvm2-monitor.service

Nov 11 14:51:16 hayes-01.lab.msp.redhat.com systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 11 14:51:16 hayes-01.lab.msp.redhat.com systemd[1]: Started Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.

Comment 16 Nitin Yewale 2020-01-07 10:54:07 UTC
The customer in SFDC #02552921 was looking for a workaround, as the service was failing to start at boot due to exported VGs.

Tested the workaround below: ignore scanning of the devices that belong to exported VGs.

# systemctl stop lvm2-monitor

Export the VG
# vgexport vg1

Check the `lvm2-monitor` status

# systemctl status lvm2-monitor
● lvm2-monitor.service - Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling
   Loaded: loaded (/usr/lib/systemd/system/lvm2-monitor.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Tue 2020-01-07 16:07:19 IST; 1min 44s ago
     Docs: man:dmeventd(8)
           man:lvcreate(8)
           man:lvchange(8)
           man:vgchange(8)
  Process: 1760 ExecStop=/usr/sbin/lvm vgchange --monitor n --ignoreskippedcluster (code=exited, status=5)
  Process: 1763 ExecStart=/usr/sbin/lvm vgchange --monitor y --ignoreskippedcluster (code=exited, status=5)
 Main PID: 1763 (code=exited, status=5)

Jan 07 16:07:19 stest systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 07 16:07:19 stest lvm[1763]: Volume group "vg1" is exported
Jan 07 16:07:19 stest lvm[1763]: 2 logical volume(s) in volume group "rhel_vm252-140" monitored
Jan 07 16:07:19 stest systemd[1]: lvm2-monitor.service: main process exited, code=exited, status=5/NOTINSTALLED
Jan 07 16:07:19 stest systemd[1]: Failed to start Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 07 16:07:19 stest systemd[1]: Unit lvm2-monitor.service entered failed state.
Jan 07 16:07:19 stest systemd[1]: lvm2-monitor.service failed.


[root@stest ~]# systemctl restart lvm2-monitor
Job for lvm2-monitor.service failed because the control process exited with error code. See "systemctl status lvm2-monitor.service" and "journalctl -xe" for details.

Configure LVM filter
--------------------

filter = [ "a|/dev/sda2$|", "r|.*|" ] 

# vgs -a -o +devices
  VG             #PV #LV #SN Attr   VSize   VFree Devices         
  rhel_vm252-140   1   2   0 wz--n- <80.13g 4.00m /dev/sda2(0)    
  rhel_vm252-140   1   2   0 wz--n- <80.13g 4.00m /dev/sda2(20000)


Start the lvm2-monitor service and check the status
----------------------------------------------------

# systemctl start lvm2-monitor
# systemctl status lvm2-monitor
● lvm2-monitor.service - Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling
   Loaded: loaded (/usr/lib/systemd/system/lvm2-monitor.service; enabled; vendor preset: enabled)
   Active: active (exited) since Tue 2020-01-07 16:11:35 IST; 10s ago
     Docs: man:dmeventd(8)
           man:lvcreate(8)
           man:lvchange(8)
           man:vgchange(8)
  Process: 1760 ExecStop=/usr/sbin/lvm vgchange --monitor n --ignoreskippedcluster (code=exited, status=5)
  Process: 1987 ExecStart=/usr/sbin/lvm vgchange --monitor y --ignoreskippedcluster (code=exited, status=0/SUCCESS)
 Main PID: 1987 (code=exited, status=0/SUCCESS)

Jan 07 16:11:35 stest systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 07 16:11:35 stest lvm[1987]: 2 logical volume(s) in volume group "rhel_vm252-140" monitored
Jan 07 16:11:35 stest systemd[1]: Started Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.

[root@stest ~]# vgs
  VG             #PV #LV #SN Attr   VSize   VFree
  rhel_vm252-140   1   2   0 wz--n- <80.13g 4.00m


Rebuild initramfs and reboot
-----------------------------
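
(A typical way to do this on RHEL 7, shown as a sketch; dracut -f rebuilds the initramfs for the currently running kernel:)

# dracut -f
# reboot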

After reboot

# systemctl status lvm2-monitor
● lvm2-monitor.service - Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling
   Loaded: loaded (/usr/lib/systemd/system/lvm2-monitor.service; enabled; vendor preset: enabled)
   Active: active (exited) since Tue 2020-01-07 16:21:33 IST; 39s ago
     Docs: man:dmeventd(8)
           man:lvcreate(8)
           man:lvchange(8)
           man:vgchange(8)
  Process: 621 ExecStart=/usr/sbin/lvm vgchange --monitor y --ignoreskippedcluster (code=exited, status=0/SUCCESS)
 Main PID: 621 (code=exited, status=0/SUCCESS)
    Tasks: 0
   CGroup: /system.slice/lvm2-monitor.service

Jan 07 16:21:33 stest systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 07 16:21:33 stest lvm[621]: 2 logical volume(s) in volume group "rhel_vm252-140" monitored
Jan 07 16:21:33 stest systemd[1]: Started Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
# uptime
 16:22:25 up 1 min,  1 user,  load average: 0.94, 0.32, 0.11


# vgs
  VG             #PV #LV #SN Attr   VSize   VFree
  rhel_vm252-140   1   2   0 wz--n- <80.13g 4.00m

Comment 17 Nitin Yewale 2020-01-07 11:11:50 UTC
After changing the filter back to the default

# vgs -a -o +devices
  Volume group vg1 is exported
  VG             #PV #LV #SN Attr   VSize   VFree Devices         
  rhel_vm252-140   1   2   0 wz--n- <80.13g 4.00m /dev/sda2(0)    
  rhel_vm252-140   1   2   0 wz--n- <80.13g 4.00m /dev/sda2(20000)

[root@stest ~]# vgs
  VG             #PV #LV #SN Attr   VSize   VFree 
  rhel_vm252-140   1   2   0 wz--n- <80.13g  4.00m
  vg1              1   0   0 wzx-n-  <3.00g <3.00g

# pvs
  PV         VG             Fmt  Attr PSize   PFree 
  /dev/sda2  rhel_vm252-140 lvm2 a--  <80.13g  4.00m
  /dev/sda3                 lvm2 ---    5.00g  5.00g
  /dev/sdb1  vg1            lvm2 ax-   <3.00g <3.00g

Comment 19 errata-xmlrpc 2020-03-31 20:04:48 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:1129
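
(To pick up the fix, a sketch of the usual update and check, using the Fixed In Version noted above:)

# yum update lvm2
# rpm -q lvm2    # expect lvm2-2.02.186-1.el7 or later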

