Bug 1489986 - invalid lvmlockd state where --lock-stop fails "LVs must first be deactivated" even w/o any LVs present
Keywords:
Status: CLOSED DUPLICATE of bug 1467975
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.4
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: low
Target Milestone: rc
Target Release: ---
Assignee: LVM and device-mapper development team
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-09-08 22:21 UTC by Corey Marthaler
Modified: 2021-09-03 12:37 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-09-11 14:57:26 UTC
Target Upstream Version:
Embargoed:


Attachments
lvmlockctl --dump from failing node harding-02 (1.00 MB, text/plain), 2017-09-08 22:24 UTC, Corey Marthaler
lvmlockctl --dump from other node harding-03 (1.00 MB, text/plain), 2017-09-08 22:26 UTC, Corey Marthaler

Description Corey Marthaler 2017-09-08 22:21:56 UTC
Description of problem:
I need to debug this a bit more and see if it's reliably reproducible.

# Test output                                                                                                                                                                          
SCENARIO - [looping_mirror_to_linear_converts]                                                                                                                                               
Create a mirror and then down and up convert it 20 times                                                                                                                                                 
harding-02: lvcreate --activate ey --type mirror -m 1 -n mirror_2_linear -L 300M --nosync mirror_sanity                                                                                                  
  WARNING: New mirror won't be synchronised. Don't read what you didn't write!                                                                                                                                    
1: down convert to linear on harding-02; up convert to mirror on harding-02                                                                                                                                                
2: down convert to linear on harding-02; up convert to mirror on harding-02
^C
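
For reference, the scenario above condenses into a short shell sketch. This is a reconstruction from the test output, not the actual test harness; the VG creation step and the loop bounds are assumptions, with device names taken from the cleanup output below.

# Hypothetical reproduction sketch (reconstructed, not the original harness)
vgcreate --shared mirror_sanity /dev/mapper/mpath[a-h]    # assumed setup; shared VG managed by lvmlockd/sanlock
vgchange --lock-start mirror_sanity
lvcreate --activate ey --type mirror -m 1 -n mirror_2_linear -L 300M --nosync mirror_sanity
for i in $(seq 1 20); do
    lvconvert -m 0 mirror_sanity/mirror_2_linear                   # down convert to linear
    lvconvert --type mirror -m 1 mirror_sanity/mirror_2_linear     # up convert back to mirror
done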


# Cleanup script
8 disk(s) to be used:
        harding-02=/dev/mapper/mpathh /dev/mapper/mpathg /dev/mapper/mpathf /dev/mapper/mpathe /dev/mapper/mpathd /dev/mapper/mpathc /dev/mapper/mpathb /dev/mapper/mpatha
        harding-03=/dev/mapper/mpathh /dev/mapper/mpathg /dev/mapper/mpathf /dev/mapper/mpathe /dev/mapper/mpathd /dev/mapper/mpathc /dev/mapper/mpathb /dev/mapper/mpatha
deactivating shared LV mirror_sanity/mirror_2_linear on: harding-03 harding-02 
removing shared LV mirror_sanity/mirror_2_linear on harding-02
removing VG mirror_sanity on harding-03
harding-02: vgchange --lock-stop  mirror_sanity
  VG mirror_sanity stop failed: LVs must first be deactivated
unable to stop lock space for mirror_sanity on harding-02
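
When the "LVs must first be deactivated" error appears with no LVs listed, comparing what lvmlockd itself still holds can narrow things down. A minimal check, using standard lvmlockctl/dmsetup commands (these are not from the report itself):

lvmlockctl --info    # lock state as lvmlockd sees it: lockspaces and any LV locks still held
lvmlockctl --dump    # full lvmlockd debug log (attached below for both nodes)
dmsetup ls           # confirm no leftover device-mapper devices remain for the VG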


# no LVs exist in the shared VG at this point, and no leftover dm devices are present on any node in the cluster
[root@harding-02 ~]# lvs -a -o +devices
  LV   VG              Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices         
  home rhel_harding-02 -wi-ao---- <200.52g                                                     /dev/sdb1(0)    
  home rhel_harding-02 -wi-ao---- <200.52g                                                     /dev/sdc1(0)    
  home rhel_harding-02 -wi-ao---- <200.52g                                                     /dev/sda2(7155) 
  root rhel_harding-02 -wi-ao----   50.00g                                                     /dev/sda2(10792)
  swap rhel_harding-02 -wi-ao----  <27.95g                                                     /dev/sda2(0)    

[root@harding-02 ~]# vgs
  VG              #PV #LV #SN Attr   VSize    VFree
  mirror_sanity    10   0   0 wz--ns    1.22t 1.22t
  rhel_harding-02   3   3   0 wz--n- <278.47g    0 

[root@harding-02 ~]# vgchange --lock-stop  mirror_sanity
  VG mirror_sanity stop failed: LVs must first be deactivated

[root@harding-02 ~]# vgremove  mirror_sanity
  Volume group "mirror_sanity" successfully removed


Version-Release number of selected component (if applicable):
3.10.0-693.el7.x86_64

lvm2-2.02.171-8.el7    BUILT: Wed Jun 28 13:28:58 CDT 2017
lvm2-libs-2.02.171-8.el7    BUILT: Wed Jun 28 13:28:58 CDT 2017
lvm2-cluster-2.02.171-8.el7    BUILT: Wed Jun 28 13:28:58 CDT 2017
device-mapper-1.02.140-8.el7    BUILT: Wed Jun 28 13:28:58 CDT 2017
device-mapper-libs-1.02.140-8.el7    BUILT: Wed Jun 28 13:28:58 CDT 2017
device-mapper-event-1.02.140-8.el7    BUILT: Wed Jun 28 13:28:58 CDT 2017
device-mapper-event-libs-1.02.140-8.el7    BUILT: Wed Jun 28 13:28:58 CDT 2017
device-mapper-persistent-data-0.7.0-0.1.rc6.el7    BUILT: Mon Mar 27 10:15:46 CDT 2017
cmirror-2.02.171-8.el7    BUILT: Wed Jun 28 13:28:58 CDT 2017
sanlock-3.5.0-1.el7    BUILT: Wed Apr 26 09:37:30 CDT 2017
sanlock-lib-3.5.0-1.el7    BUILT: Wed Apr 26 09:37:30 CDT 2017
lvm2-lockd-2.02.171-8.el7    BUILT: Wed Jun 28 13:28:58 CDT 2017

Comment 2 Corey Marthaler 2017-09-08 22:24:30 UTC
Created attachment 1323947 [details]
lvmlockctl --dump from failing node harding-02

Comment 3 Corey Marthaler 2017-09-08 22:26:21 UTC
Created attachment 1323948 [details]
lvmlockctl --dump from other node harding-03

Comment 4 Corey Marthaler 2017-09-11 14:57:26 UTC
Assuming this is bug 1467975 for now.

*** This bug has been marked as a duplicate of bug 1467975 ***

