Bug 1868833 - RFE: have cluster lvm cmds mention the config file when lvmlockd isn't set properly
Summary: RFE: have cluster lvm cmds mention the config file when lvmlockd isn't set properly
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: lvm2
Version: 8.3
Hardware: x86_64
OS: Linux
Priority: low
Severity: low
Target Milestone: rc
Target Release: 8.0
Assignee: David Teigland
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-08-13 23:12 UTC by Corey Marthaler
Modified: 2022-05-10 16:37 UTC
CC List: 7 users

Fixed In Version: lvm2-2.03.14-1.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-05-10 15:21:57 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker CLUSTERQE-5149 0 None None None 2021-12-14 00:06:50 UTC
Red Hat Product Errata RHBA-2022:2038 0 None None None 2022-05-10 15:22:23 UTC

Description Corey Marthaler 2020-08-13 23:12:31 UTC
Description of problem:
After upgrading the lvm rpms and copying over the new lvm.conf file, the new file overrode the enabled "use_lvmlockd" config option, even though sanlock and lvmlockd were still running properly. It took me a while to figure out that this was the issue once lvm create commands started to fail.
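
For reference, the option the replacement lvm.conf reset is global/use_lvmlockd; a minimal sketch of checking and restoring it (commands are illustrative, not output captured from this host):

# check the effective setting
lvmconfig global/use_lvmlockd
# prints use_lvmlockd=0 when the option was lost

# re-enable it in /etc/lvm/lvm.conf (global section):
#     use_lvmlockd = 1
# then verify:
lvmconfig global/use_lvmlockd
# should now print use_lvmlockd=1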


[root@host-083 ~]# systemctl status lvmlockd
● lvmlockd.service - LVM lock daemon
   Loaded: loaded (/usr/lib/systemd/system/lvmlockd.service; disabled; vendor preset: disabled)
   Active: active (running) (thawing) since Thu 2020-08-13 16:58:45 CDT; 1min 13s ago
     Docs: man:lvmlockd(8)
 Main PID: 26262 (lvmlockd)
    Tasks: 3 (limit: 93971)
   Memory: 2.4M
   CGroup: /system.slice/lvmlockd.service
           └─26262 /usr/sbin/lvmlockd --foreground
 
Aug 13 16:58:45 host-083.virt.lab.msp.redhat.com systemd[1]: Starting LVM lock daemon...
Aug 13 16:58:45 host-083.virt.lab.msp.redhat.com lvmlockd[26262]: [D] creating /run/lvm/lvmlockd.socket
Aug 13 16:58:45 host-083.virt.lab.msp.redhat.com lvmlockd[26262]: 1597355925 lvmlockd started
Aug 13 16:58:45 host-083.virt.lab.msp.redhat.com systemd[1]: Started LVM lock daemon.
 
 
[root@host-083 ~]# systemctl status sanlock
● sanlock.service - Shared Storage Lease Manager
   Loaded: loaded (/usr/lib/systemd/system/sanlock.service; disabled; vendor preset: disabled)
   Active: active (running) (thawing) since Thu 2020-08-13 16:58:24 CDT; 1min 3s ago
  Process: 26236 ExecStart=/usr/sbin/sanlock daemon (code=exited, status=0/SUCCESS)
 Main PID: 26237 (sanlock)
    Tasks: 6 (limit: 93971)
   Memory: 14.2M
   CGroup: /system.slice/sanlock.service
           ├─26237 /usr/sbin/sanlock daemon
           └─26238 /usr/sbin/sanlock daemon
 
Aug 13 16:58:24 host-083.virt.lab.msp.redhat.com systemd[1]: Starting Shared Storage Lease Manager...
Aug 13 16:58:24 host-083.virt.lab.msp.redhat.com systemd[1]: Started Shared Storage Lease Manager.
Aug 13 16:58:24 host-083.virt.lab.msp.redhat.com sanlock[26237]: 2020-08-13 16:58:24 12371 [26237]: set scheduler RR|RESET_ON_FORK priority 99 failed: Operation not permitted
 

 
[root@host-083 ~]# vgcreate  --shared global /dev/sdd1
  Using a shared lock type requires lvmlockd.
  Run `vgcreate --help' for more information.

[root@host-083 ~]# pvscan
  PV /dev/vda2   VG rhel_host-083   lvm2 [<7.00 GiB / 1.40 GiB free]
  Total: 1 [<7.00 GiB] / in use: 1 [<7.00 GiB] / in no VG: 0 [0   ]


Version-Release number of selected component (if applicable):
kernel-4.18.0-232.el8    BUILT: Mon Aug 10 02:17:54 CDT 2020
lvm2-2.03.09-5.el8    BUILT: Wed Aug 12 15:51:50 CDT 2020
lvm2-libs-2.03.09-5.el8    BUILT: Wed Aug 12 15:51:50 CDT 2020
lvm2-dbusd-2.03.09-5.el8    BUILT: Wed Aug 12 15:49:44 CDT 2020
lvm2-lockd-2.03.09-5.el8    BUILT: Wed Aug 12 15:51:50 CDT 2020
device-mapper-1.02.171-5.el8    BUILT: Wed Aug 12 15:51:50 CDT 2020
device-mapper-libs-1.02.171-5.el8    BUILT: Wed Aug 12 15:51:50 CDT 2020
device-mapper-event-1.02.171-5.el8    BUILT: Wed Aug 12 15:51:50 CDT 2020
device-mapper-event-libs-1.02.171-5.el8    BUILT: Wed Aug 12 15:51:50 CDT 2020
sanlock-3.8.1-1.el8    BUILT: Thu Jul  9 14:02:05 CDT 2020
sanlock-lib-3.8.1-1.el8    BUILT: Thu Jul  9 14:02:05 CDT 2020


How reproducible:
Every time
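
A minimal reproducer sketch (VG name and device path are examples; assumes sanlock and lvmlockd are running but lvm.conf still carries the default use_lvmlockd = 0):

# lvm.conf still has:  use_lvmlockd = 0
systemctl start sanlock lvmlockd
vgcreate --shared global /dev/sdd1
# fails with: "Using a shared lock type requires lvmlockd."
# setting use_lvmlockd = 1 in /etc/lvm/lvm.conf lets the same command succeed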

Comment 4 Nate Straz 2021-09-24 20:09:51 UTC
I ran into this while testing on RHEL9 Beta.  I had the cluster up with lvmlockd running in the locking resource group and vgcreate failed.

[root@host-002 ~]# pcs status --full
Cluster name: STSRHTS26331
Cluster Summary:
  * Stack: corosync
  * Current DC: host-002 (1) (version 2.1.0-11.el9-7c3f660707) - partition with quorum
  * Last updated: Fri Sep 24 15:09:16 2021
  * Last change:  Fri Sep 24 13:17:57 2021 by root via cibadmin on host-002
  * 5 nodes configured
  * 15 resource instances configured

Node List:
  * Online: [ host-002 (1) host-003 (2) host-004 (3) host-005 (4) host-006 (5) ]

Full List of Resources:
  * fence-host-002	(stonith:fence_xvm):	Started host-002
  * fence-host-003	(stonith:fence_xvm):	Started host-003
  * fence-host-004	(stonith:fence_xvm):	Started host-004
  * fence-host-005	(stonith:fence_xvm):	Started host-005
  * fence-host-006	(stonith:fence_xvm):	Started host-006
  * Clone Set: locking-clone [locking]:
    * Resource Group: locking:0:
      * dlm	(ocf:pacemaker:controld):	Started host-004
      * lvmlockd	(ocf:heartbeat:lvmlockd):	Started host-004
    * Resource Group: locking:1:
      * dlm	(ocf:pacemaker:controld):	Started host-005
      * lvmlockd	(ocf:heartbeat:lvmlockd):	Started host-005
    * Resource Group: locking:2:
      * dlm	(ocf:pacemaker:controld):	Started host-006
      * lvmlockd	(ocf:heartbeat:lvmlockd):	Started host-006
    * Resource Group: locking:3:
      * dlm	(ocf:pacemaker:controld):	Started host-002
      * lvmlockd	(ocf:heartbeat:lvmlockd):	Started host-002
    * Resource Group: locking:4:
      * dlm	(ocf:pacemaker:controld):	Started host-003
      * lvmlockd	(ocf:heartbeat:lvmlockd):	Started host-003

Migration Summary:

Tickets:

PCSD Status:
  host-002: Online
  host-003: Online
  host-004: Online
  host-005: Online
  host-006: Online

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
[root@host-002 ~]# vgcreate --config devices/scan_lvs=1  --shared STSRHTS26331 /dev/sda1
  Using a shared lock type requires lvmlockd.
  Run `vgcreate --help' for more information.
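
As in the original report, the failing check is on the lvm.conf option rather than on whether the daemon is running, so having the lvmlockd resource started by pacemaker is not enough on its own. A quick check on the node running the command (illustrative, not captured from host-002):

lvmconfig global/use_lvmlockd
# use_lvmlockd=0 here would explain the failure above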

Comment 5 David Teigland 2021-09-24 20:43:13 UTC
https://sourceware.org/git/?p=lvm2.git;a=commit;h=e62a71f3dd97795ea64b2f3948dd8629c2dac8b8

log_error("Using a shared lock type requires lvmlockd (lvm.conf use_lvmlockd.)");

Comment 7 Corey Marthaler 2021-11-03 16:06:16 UTC
Marking Verified:Tested in the latest rpms.

kernel-4.18.0-348.4.el8    BUILT: Mon Oct 25 14:44:48 CDT 2021
lvm2-2.03.14-1.el8    BUILT: Wed Oct 20 10:18:17 CDT 2021
lvm2-libs-2.03.14-1.el8    BUILT: Wed Oct 20 10:18:17 CDT 2021
lvm2-dbusd-2.03.14-1.el8    BUILT: Wed Oct 20 10:18:48 CDT 2021


[root@hayes-03 ~]#  systemctl status lvmlockd
● lvmlockd.service - LVM lock daemon
   Loaded: loaded (/usr/lib/systemd/system/lvmlockd.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2021-11-03 11:00:44 CDT; 1min 57s ago
     Docs: man:lvmlockd(8)
 Main PID: 98930 (lvmlockd)
    Tasks: 3 (limit: 1647315)
   Memory: 2.8M
   CGroup: /system.slice/lvmlockd.service
           └─98930 /usr/sbin/lvmlockd --foreground

Nov 03 11:00:44 hayes-03.lab.msp.redhat.com systemd[1]: Starting LVM lock daemon...
Nov 03 11:00:44 hayes-03.lab.msp.redhat.com lvmlockd[98930]: [D] creating /run/lvm/lvmlockd.socket
Nov 03 11:00:44 hayes-03.lab.msp.redhat.com lvmlockd[98930]: 1635955244 lvmlockd started
Nov 03 11:00:44 hayes-03.lab.msp.redhat.com systemd[1]: Started LVM lock daemon.

[root@hayes-03 ~]# grep use_lvmlockd /etc/lvm/lvm.conf
        # Configuration option global/use_lvmlockd.
        use_lvmlockd = 0

[root@hayes-03 ~]# vgcreate  --shared global /dev/sdd1
  Using a shared lock type requires lvmlockd (lvm.conf use_lvmlockd.)
  Run `vgcreate --help' for more information.

Comment 11 Corey Marthaler 2021-11-17 00:32:44 UTC
Marking Verified in the latest rpms.

kernel-4.18.0-348.4.el8.kpq0    BUILT: Wed Oct 27 15:00:32 CDT 2021
lvm2-2.03.14-1.el8    BUILT: Wed Oct 20 10:18:17 CDT 2021
lvm2-libs-2.03.14-1.el8    BUILT: Wed Oct 20 10:18:17 CDT 2021
lvm2-lockd-2.03.14-1.el8    BUILT: Wed Oct 20 10:18:17 CDT 2021


[root@hayes-02 ~]# grep use_lvmlockd /etc/lvm/lvm.conf
        # Configuration option global/use_lvmlockd.
        # use_lvmlockd = 0

[root@hayes-02 ~]# vgcreate  --shared global /dev/sdd1
  Using a shared lock type requires lvmlockd (lvm.conf use_lvmlockd.)
  Run `vgcreate --help' for more information.

Comment 13 errata-xmlrpc 2022-05-10 15:21:57 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (lvm2 bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:2038

