Bug 612862 - clvmd does not clean dlm lockspace on -S restart
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.0
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: rc
Target Release: ---
Assigned To: Milan Broz
QA Contact: Corey Marthaler
Depends On:
Blocks:
Reported: 2010-07-09 05:17 EDT by Fabio Massimo Di Nitto
Modified: 2014-08-11 03:07 EDT (History)
CC List: 12 users

See Also:
Fixed In Version: lvm2-2.02.82-1.el6
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2011-05-19 10:26:14 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:


Attachments: None
Description Fabio Massimo Di Nitto 2010-07-09 05:17:54 EDT
Version-Release number of selected component (if applicable):

lvm2-cluster-2.02.69-1.el6.x86_64

How reproducible:

always

Steps to Reproduce:
1. service cman start
2. service clvmd start
3. clvmd -S
4. service clvmd stop
5. service cman stop
  
Actual results:

[root@rhel6-node2 ~]# /etc/init.d/cman stop
Stopping cluster: 
   Leaving fence domain... found dlm lockspace /sys/kernel/dlm/clvmd
fence_tool: cannot leave due to active systems
                                                           [FAILED]

Expected results:

The clvmd dlm lockspace should be cleared. It's not possible to clear it even via the command line.

dlm_tool leave clvmd reports success, but the lockspace seems to be unremovable.
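(For illustration only, not part of the original report: a minimal sketch of commands one might use to inspect the leftover lockspace after the failed cman stop, assuming the standard dlm_tool utility and the clvmd lockspace path shown in the log above.)

    # Check whether the clvmd lockspace directory is still present in sysfs
    ls /sys/kernel/dlm/
    # List the lockspaces dlm_controld still knows about
    dlm_tool ls
    # A single manual leave reports success, yet the entry remains
    dlm_tool leave clvmd
    ls /sys/kernel/dlm/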
Comment 1 Fabio Massimo Di Nitto 2010-07-09 05:20:02 EDT
Small correction: repeating dlm_tool leave N times will eventually remove the lockspace. The counter is probably incremented each time clvmd -S is executed.
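(Again for illustration only: a hedged shell sketch of the workaround implied above, repeating dlm_tool leave until the sysfs entry disappears and counting the iterations, which would also hint at how many times clvmd -S had bumped the reference.)

    # Hypothetical workaround/diagnostic, run after 'service clvmd stop':
    # keep leaving the clvmd lockspace until its sysfs directory is gone.
    count=0
    while [ -d /sys/kernel/dlm/clvmd ]; do
        dlm_tool leave clvmd || break
        count=$((count + 1))
        sleep 1
    done
    echo "removed after $count leave operation(s)"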
Comment 3 RHEL Product and Program Management 2010-07-15 10:32:37 EDT
This issue was proposed at a time when only blocker issues are being
considered for the current Red Hat Enterprise Linux release, and it has
been denied for the current Red Hat Enterprise Linux release.

** If you would still like this issue considered for the current
release, ask your support representative to file as a blocker on
your behalf. Otherwise ask that it be considered for the next
Red Hat Enterprise Linux release. **
Comment 4 Milan Broz 2010-11-16 09:13:48 EST
This needs some investigation, but clvmd -S should not keep any additional references to resources.
Comment 5 Milan Broz 2011-01-18 12:34:48 EST
Patch sent to the lvm-devel list:
https://www.redhat.com/archives/lvm-devel/2011-January/msg00130.html
Comment 6 Milan Broz 2011-01-19 18:13:06 EST
Fix in upstream lvm 2.02.82.
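(A hypothetical spot check for a build carrying this fix, mirroring the verification below: after running clvmd -S and then stopping the clvmd service, the lockspace should already be gone before cman is stopped.)

    # Hypothetical check on a fixed build, run after 'service clvmd stop'
    if [ -d /sys/kernel/dlm/clvmd ]; then
        echo "clvmd lockspace still present (bug not fixed)"
    else
        echo "clvmd lockspace cleaned up"
    fi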
Comment 8 Corey Marthaler 2011-04-06 12:21:28 EDT
Fix verified in the latest rpms.

2.6.32-128.el6.x86_64

lvm2-2.02.83-3.el6    BUILT: Fri Mar 18 09:31:10 CDT 2011
lvm2-libs-2.02.83-3.el6    BUILT: Fri Mar 18 09:31:10 CDT 2011
lvm2-cluster-2.02.83-3.el6    BUILT: Fri Mar 18 09:31:10 CDT 2011
udev-147-2.35.el6    BUILT: Wed Mar 30 07:32:05 CDT 2011
device-mapper-1.02.62-3.el6    BUILT: Fri Mar 18 09:31:10 CDT 2011
device-mapper-libs-1.02.62-3.el6    BUILT: Fri Mar 18 09:31:10 CDT 2011
device-mapper-event-1.02.62-3.el6    BUILT: Fri Mar 18 09:31:10 CDT 2011
device-mapper-event-libs-1.02.62-3.el6    BUILT: Fri Mar 18 09:31:10 CDT 2011
cmirror-2.02.83-3.el6    BUILT: Fri Mar 18 09:31:10 CDT 2011


[root@taft-01 ~]# service cman start
Starting cluster:
   Checking if cluster has been disabled at boot...        [  OK  ]
   Checking Network Manager...                             [  OK  ]
   Global setup...                                         [  OK  ]
   Loading kernel modules...                               [  OK  ]
   Mounting configfs...                                    [  OK  ]
   Starting cman...                                        [  OK  ]
   Waiting for quorum...                                   [  OK  ]
   Starting fenced...                                      [  OK  ]
   Starting dlm_controld...                                [  OK  ]
   Starting gfs_controld...                                [  OK  ]
   Unfencing self...                                       [  OK  ]
   Joining fence domain...                                 [  OK  ]
[root@taft-01 ~]# service clvmd start
Starting clvmd:
Activating VG(s):   4 logical volume(s) in volume group "mirror_sanity" now active
  3 logical volume(s) in volume group "vg_taft01" now active
                                                           [  OK  ]
[root@taft-01 ~]# clvmd -S

[root@taft-01 ~]# service clvmd stop
Deactivating clustered VG(s):   0 logical volume(s) in volume group "mirror_sanity" now active
  clvmd not running on node taft-04
  clvmd not running on node taft-02
                                                           [  OK  ]
Signaling clvmd to exit                                    [  OK  ]
clvmd terminated                                           [  OK  ]

[root@taft-01 ~]# service cman stop
Stopping cluster:
   Leaving fence domain...                                 [  OK  ]
   Stopping gfs_controld...                                [  OK  ]
   Stopping dlm_controld...                                [  OK  ]
   Stopping fenced...                                      [  OK  ]
   Stopping cman...                                        [  OK  ]
   Waiting for corosync to shutdown:                       [  OK  ]
   Unloading kernel modules...                             [  OK  ]
   Unmounting configfs...                                  [  OK  ]
Comment 9 errata-xmlrpc 2011-05-19 10:26:14 EDT
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2011-0772.html
