Bug 1811725 - snapshot of inactive LV leaves the LV locked
Summary: snapshot of inactive LV leaves the LV locked
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: lvm2
Version: 8.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: 8.0
Assignee: David Teigland
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-03-09 15:47 UTC by David Teigland
Modified: 2021-09-07 11:50 UTC
CC List: 10 users

Fixed In Version: lvm2-2.03.09-4.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-11-04 02:00:25 UTC
Type: Bug
Target Upstream Version:
Embargoed:




Links:
Red Hat Product Errata RHBA-2020:4546 (Last Updated: 2020-11-04 02:00:42 UTC)

Description David Teigland 2020-03-09 15:47:43 UTC
Description of problem:

Problem reported upstream:
https://www.redhat.com/archives/linux-lvm/2020-March/msg00000.html
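
A minimal sketch of the reported scenario, assuming a shared (sanlock) VG; the device path and the VG/LV names below are hypothetical:

  vgcreate --shared vg1 /dev/sdX1            # shared VG managed by lvmlockd/sanlock
  lvcreate --activate n -L 12M -n lv1 vg1    # create an LV without activating it
  lvcreate -s -L 12M -n lv1-snap vg1/lv1     # snapshot of the inactive LV
  lvmlockctl -i                              # before the fix, the lock on lv1 was left
                                             # held, so another host could not activate it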



Comment 5 Corey Marthaler 2020-08-17 21:08:37 UTC
Fix verified in the latest rpms.

kernel-4.18.0-232.el8    BUILT: Mon Aug 10 02:17:54 CDT 2020
lvm2-2.03.09-5.el8    BUILT: Wed Aug 12 15:51:50 CDT 2020
lvm2-libs-2.03.09-5.el8    BUILT: Wed Aug 12 15:51:50 CDT 2020
lvm2-lockd-2.03.09-5.el8    BUILT: Wed Aug 12 15:51:50 CDT 2020
device-mapper-1.02.171-5.el8    BUILT: Wed Aug 12 15:51:50 CDT 2020
device-mapper-libs-1.02.171-5.el8    BUILT: Wed Aug 12 15:51:50 CDT 2020
device-mapper-event-1.02.171-5.el8    BUILT: Wed Aug 12 15:51:50 CDT 2020
device-mapper-event-libs-1.02.171-5.el8    BUILT: Wed Aug 12 15:51:50 CDT 2020
device-mapper-persistent-data-0.8.5-3.el8    BUILT: Wed Nov 27 07:05:21 CST 2019
sanlock-3.8.2-1.el8    BUILT: Mon Aug 10 12:12:49 CDT 2020
sanlock-lib-3.8.2-1.el8    BUILT: Mon Aug 10 12:12:49 CDT 2020



[root@host-073 ~]# systemctl status sanlock
● sanlock.service - Shared Storage Lease Manager
   Loaded: loaded (/usr/lib/systemd/system/sanlock.service; disabled; vendor preset: disabled)
   Active: active (running) since Thu 2020-08-13 18:20:44 CDT; 3 days ago
  Process: 3582 ExecStart=/usr/sbin/sanlock daemon (code=exited, status=0/SUCCESS)
 Main PID: 3591 (sanlock)
    Tasks: 9 (limit: 93971)
   Memory: 90.9M
   CGroup: /system.slice/sanlock.service
           ├─3591 /usr/sbin/sanlock daemon
           └─3592 /usr/sbin/sanlock daemon

Aug 13 18:20:44 host-073.virt.lab.msp.redhat.com systemd[1]: Starting Shared Storage Lease Manager...
Aug 13 18:20:44 host-073.virt.lab.msp.redhat.com systemd[1]: Started Shared Storage Lease Manager.

[root@host-073 ~]# systemctl status lvmlockd
● lvmlockd.service - LVM lock daemon
   Loaded: loaded (/usr/lib/systemd/system/lvmlockd.service; disabled; vendor preset: disabled)
   Active: active (running) since Thu 2020-08-13 18:21:01 CDT; 3 days ago
     Docs: man:lvmlockd(8)
 Main PID: 3612 (lvmlockd)
    Tasks: 5 (limit: 93971)
   Memory: 5.0M
   CGroup: /system.slice/lvmlockd.service
           └─3612 /usr/sbin/lvmlockd --foreground

Aug 13 18:21:01 host-073.virt.lab.msp.redhat.com systemd[1]: Starting LVM lock daemon...
Aug 13 18:21:01 host-073.virt.lab.msp.redhat.com lvmlockd[3612]: [D] creating /run/lvm/lvmlockd.socket
Aug 13 18:21:01 host-073.virt.lab.msp.redhat.com lvmlockd[3612]: 1597360861 lvmlockd started
Aug 13 18:21:01 host-073.virt.lab.msp.redhat.com systemd[1]: Started LVM lock daemon.
Aug 13 21:16:18 host-073.virt.lab.msp.redhat.com lvmlockd[3612]: 1597371378 S lvm_vdo_2_7084 R SS1RVu-epPQ-iDhX-UqO4-Osn2-wXM1-CPtf6m clear lock persistent
Aug 17 11:09:21 host-073.virt.lab.msp.redhat.com lvmlockd[3612]: 1597680561 S lvm_vdo_1_1668 R 3iLFuj-EDhs-na4c-wC0t-Bl1c-L4QY-sG2faN clear lock persistent


[root@host-073 ~]# vgcreate  --shared test /dev/sdd1 /dev/sdc2 /dev/sdc1
  Logical volume "lvmlock" created.
  Volume group "test" successfully created
  VG test starting sanlock lockspace
  Starting locking.  Waiting until locks are ready...

# Other host
[root@host-083 ~]# vgchange --lock-start test
  VG test starting sanlock lockspace
  Starting locking.  Waiting for sanlock may take 20 sec to 3 min...


[root@host-073 ~]# lvcreate --activate n -L 12M  -n lv test
  WARNING: Logical volume test/lv not zeroed.
  Logical volume "lv" created.
[root@host-073 ~]# lvcreate --activate n -L 12M  -n lv2 test
  WARNING: Logical volume test/lv2 not zeroed.
  Logical volume "lv2" created.

[root@host-073 ~]# lvs -o +lv_uuid
  LV     VG            Attr       LSize   Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert LV UUID                               
  lv     test          -wi-------  12.00m                                                       RmsfC9-qAoj-P2G8-0sgW-jkdR-iUBN-DhZFPV
  lv2    test          -wi-------  12.00m                                                       4XCm0V-3ZG8-JOZa-vynS-gKb1-mciT-Ahgn8v

[root@host-073 ~]# lvmlockctl -i
VG global lock_type=sanlock V3wQRT-GjNp-Wx3A-fTxg-snpz-SsF1-oaYdvd
LS sanlock lvm_global
LK VG un ver 0
LK GL un ver 6

VG test lock_type=sanlock xOZ1Rd-XgC3-X32T-pF0D-m97S-11Tu-n1YxtS
LS sanlock lvm_test
LK VG un ver 5
LK LV un RmsfC9-qAoj-P2G8-0sgW-jkdR-iUBN-DhZFPV
LK LV un 4XCm0V-3ZG8-JOZa-vynS-gKb1-mciT-Ahgn8v

[root@host-073 ~]# lvcreate -L 12m -s lv2 -n test/lv2-snap
  Logical volume "lv2-snap" created.

[root@host-073 ~]# lvs -o +lv_uuid
  LV       VG            Attr       LSize   Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert LV UUID                               
  lv       test          -wi-------  12.00m                                                       RmsfC9-qAoj-P2G8-0sgW-jkdR-iUBN-DhZFPV
  lv2      test          owi---s---  12.00m                                                       4XCm0V-3ZG8-JOZa-vynS-gKb1-mciT-Ahgn8v
  lv2-snap test          swi---s---  12.00m        lv2                                            AZmG1a-fgJ4-Havn-eJNq-pX6u-nla5-fumXpq

[root@host-073 ~]# lvmlockctl -i
VG global lock_type=sanlock V3wQRT-GjNp-Wx3A-fTxg-snpz-SsF1-oaYdvd
LS sanlock lvm_global
LK VG un ver 0
LK GL un ver 6

VG test lock_type=sanlock xOZ1Rd-XgC3-X32T-pF0D-m97S-11Tu-n1YxtS
LS sanlock lvm_test
LK VG un ver 7
LK LV un RmsfC9-qAoj-P2G8-0sgW-jkdR-iUBN-DhZFPV
LK LV un 4XCm0V-3ZG8-JOZa-vynS-gKb1-mciT-Ahgn8v

# The other host is now able to activate the origin volume: creating the snapshot no longer leaves test/lv2 locked on host-073, so host-083 can take the LV lock exclusively ("LK LV ex" in the lvmlockctl output below)

[root@host-083 ~]# lvchange -a y test/lv2
[root@host-083 ~]# lvs -a -o +devices
  LV              VG            Attr       LSize   Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices        
  [lvmlock]       global        -wi-ao---- 256.00m                                                       /dev/sde2(0)   
  lv              test          -wi-------  12.00m                                                       /dev/sde1(64)  
  lv2             test          owi-a-s---  12.00m                                                       /dev/sde1(67)  
  lv2-snap        test          swi-a-s---  12.00m        lv2    0.00                                    /dev/sde1(70)  
  [lvmlock]       test          -wi-ao---- 256.00m                                                       /dev/sde1(0)   

[root@host-083 ~]#  lvmlockctl -i
VG global lock_type=sanlock V3wQRT-GjNp-Wx3A-fTxg-snpz-SsF1-oaYdvd
LS sanlock lvm_global
LK VG un ver 0
LK GL un ver 6

VG test lock_type=sanlock xOZ1Rd-XgC3-X32T-pF0D-m97S-11Tu-n1YxtS
LS sanlock lvm_test
LK VG un ver 7
LK LV ex 4XCm0V-3ZG8-JOZa-vynS-gKb1-mciT-Ahgn8v

Comment 8 errata-xmlrpc 2020-11-04 02:00:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (lvm2 bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4546

