Bug 1259958 - removing a vg seems to require a lock stop and then a start
Summary: removing a vg seems to require a lock stop and then a start
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: David Teigland
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-09-03 23:37 UTC by Corey Marthaler
Modified: 2021-09-03 12:40 UTC
CC List: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-09-15 15:10:28 UTC
Target Upstream Version:
Embargoed:



Description Corey Marthaler 2015-09-03 23:37:42 UTC
Description of problem:
I'm still new to lvmlockd, so it's possible I set something up wrong here.
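
For reference, a shared sanlock VG is normally set up roughly as follows before the steps below (a sketch of the assumed setup, not necessarily the exact commands used here; /dev/sdb is a placeholder device):

# assumed setup sketch: lvmlockd and sanlock (with wdmd) already running on both hosts,
# lvm.conf has use_lvmlockd = 1, and each host has a unique host_id configured
[root@harding-03 ~]# vgcreate --shared VG5 /dev/sdb
# every other host starts the lockspace before it uses the VG
[root@harding-02 ~]# vgchange --lock-start VG5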

# all LVs removed
[root@harding-03 ~]# lvs
  LV   VG              Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home rhel_harding-03 -wi-ao---- 200.97g                                                    
  root rhel_harding-03 -wi-ao----  50.00g                                                    
  swap rhel_harding-03 -wi-ao----  27.95g                                                    

[root@harding-02 ~]# lvs
  LV   VG              Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home rhel_harding-02 -wi-ao---- 200.97g                                                    
  root rhel_harding-02 -wi-ao----  50.00g                                                    
  swap rhel_harding-02 -wi-ao----  27.95g                                                    

[root@harding-03 ~]# vgremove VG5
  Lockspace for "VG5" not stopped on other hosts

[root@harding-02 ~]# vgchange --lock-stop VG5

[root@harding-02 ~]# vgremove VG5
  VG VG5 lock failed: lockspace is inactive

[root@harding-03 ~]# vgremove VG5
  Lockspace for "VG5" not stopped on other hosts

[root@harding-03 ~]# vgchange --lock-stop VG5

[root@harding-03 ~]# vgremove VG5
  VG VG5 lock failed: lockspace is inactive

[root@harding-03 ~]# vgchange --lock-start VG5
  VG VG5 starting sanlock lockspace
  Starting locking.  Waiting until locks are ready...

[root@harding-03 ~]# vgremove VG5
  Volume group "VG5" successfully removed

 


Version-Release number of selected component (if applicable):
3.10.0-306.el7.x86_64

lvm2-2.02.129-2.el7    BUILT: Wed Sep  2 02:51:56 CDT 2015
lvm2-libs-2.02.129-2.el7    BUILT: Wed Sep  2 02:51:56 CDT 2015
lvm2-cluster-2.02.129-2.el7    BUILT: Wed Sep  2 02:51:56 CDT 2015
device-mapper-1.02.106-2.el7    BUILT: Wed Sep  2 02:51:56 CDT 2015
device-mapper-libs-1.02.106-2.el7    BUILT: Wed Sep  2 02:51:56 CDT 2015
device-mapper-event-1.02.106-2.el7    BUILT: Wed Sep  2 02:51:56 CDT 2015
device-mapper-event-libs-1.02.106-2.el7    BUILT: Wed Sep  2 02:51:56 CDT 2015
device-mapper-persistent-data-0.5.5-1.el7    BUILT: Thu Aug 13 09:58:10 CDT 2015
cmirror-2.02.129-2.el7    BUILT: Wed Sep  2 02:51:56 CDT 2015
sanlock-3.2.4-1.el7    BUILT: Fri Jun 19 12:48:49 CDT 2015
sanlock-lib-3.2.4-1.el7    BUILT: Fri Jun 19 12:48:49 CDT 2015
lvm2-lockd-2.02.129-2.el7    BUILT: Wed Sep  2 02:51:56 CDT 2015

Comment 1 David Teigland 2015-09-04 14:21:49 UTC
The VG needs to be started on the host that is removing it, but it needs to be stopped on all other hosts.
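
Concretely, the sequence that succeeds in the transcript above is (sketch, hostnames as in the reproducer):

# stop the lockspace on every host that is NOT doing the removal
[root@harding-02 ~]# vgchange --lock-stop VG5

# on the removing host, make sure the lockspace is started, then remove
[root@harding-03 ~]# vgchange --lock-start VG5
[root@harding-03 ~]# vgremove VG5
  Volume group "VG5" successfully removed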


[root@harding-02 ~]# vgremove VG5
  VG VG5 lock failed: lockspace is inactive
(This means that you need to start the VG to remove it.  The message should probably use correct terminology and say the "lockspace is not started".)


[root@harding-03 ~]# vgremove VG5
  Lockspace for "VG5" not stopped on other hosts
(The VG needs to be stopped on *other* hosts before this host can remove it.  Maybe it would be clearer if the message said the VG is started on other hosts rather than saying it's not stopped on other hosts.)


Also, with sanlock, it can take several seconds to notice that another host has stopped the VG.  It may be worth adding an option to retry internally for a while to compensate for that.
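
Until such a retry option exists, a caller-side equivalent could look like this (a workaround sketch only, not an existing LVM feature):

# retry vgremove for up to ~30 seconds while the other hosts' lock-stop propagates through sanlock
[root@harding-03 ~]# for i in $(seq 1 10); do vgremove -y VG5 && break; sleep 3; done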

