Bug 612862

Summary: clvmd does not clean dlm lockspace on -S restart
Product: Red Hat Enterprise Linux 6
Version: 6.0
Component: lvm2
Reporter: Fabio Massimo Di Nitto <fdinitto>
Assignee: Milan Broz <mbroz>
QA Contact: Corey Marthaler <cmarthal>
Status: CLOSED ERRATA
Severity: medium
Priority: low
Target Milestone: rc
Hardware: All
OS: Linux
Fixed In Version: lvm2-2.02.82-1.el6
Doc Type: Bug Fix
Last Closed: 2011-05-19 14:26:14 UTC
CC: agk, amirmemon2099, coughlan, dwysocha, heinzm, jbrassow, joe.thornber, mbroz, prajnoha, prockai, pvrabec, zkabelac

Description Fabio Massimo Di Nitto 2010-07-09 09:17:54 UTC
Version-Release number of selected component (if applicable):

lvm2-cluster-2.02.69-1.el6.x86_64

How reproducible:

always

Steps to Reproduce:
1. service cman start
2. service clvmd start
3. clvmd -S
4. service clvmd stop
5. service cman stop
  
Actual results:

[root@rhel6-node2 ~]# /etc/init.d/cman stop
Stopping cluster: 
   Leaving fence domain... found dlm lockspace /sys/kernel/dlm/clvmd
fence_tool: cannot leave due to active systems
                                                           [FAILED]

Expected results:

The clvmd dlm lockspace should be cleared. It's not possible to clear it even via the command line.

dlm_tool leave clvmd reports success, but the lockspace seems to be unremovable.
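
To confirm the leftover lockspace from the command line (a rough sketch; the exact dlm_tool output may differ between versions):

   ls /sys/kernel/dlm/        # the "clvmd" directory is still present
   dlm_tool ls                # the clvmd lockspace is still listed by the kernel
   dlm_tool leave clvmd       # reports success, yet the lockspace stays around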

Comment 1 Fabio Massimo Di Nitto 2010-07-09 09:20:02 UTC
Small correction: repeating dlm_tool leave N times will eventually remove the lockspace. The reference counter is probably incremented each time clvmd -S is executed.
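
If that is the case, the sequence would look roughly like this (an illustrative sketch of the suspected reference counting, not verified output):

   service clvmd start      # joins the "clvmd" lockspace, use count 1
   clvmd -S                 # restart; suspected to leave an extra reference, count 2
   clvmd -S                 # count 3
   service clvmd stop       # releases one reference, count 2
   dlm_tool leave clvmd     # count 1
   dlm_tool leave clvmd     # count 0, /sys/kernel/dlm/clvmd finally goes away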

Comment 3 RHEL Program Management 2010-07-15 14:32:37 UTC
This issue was proposed at a time when only blocker issues are being
considered for the current Red Hat Enterprise Linux release. It has
been denied for the current Red Hat Enterprise Linux release.

** If you would still like this issue considered for the current
release, ask your support representative to file as a blocker on
your behalf. Otherwise ask that it be considered for the next
Red Hat Enterprise Linux release. **

Comment 4 Milan Broz 2010-11-16 14:13:48 UTC
This needs some investigation, but clvmd -S should not keep any additional references to resources.
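
A possible starting point (only a sketch, assuming the extra reference shows up as a dlm device kept open across the -S restart):

   pidof clvmd                                  # PID of the restarted daemon
   ls -l /proc/$(pidof clvmd)/fd | grep -i dlm  # any old lockspace devices still held open?
   dlm_tool ls                                  # compare with the lockspaces the kernel reports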

Comment 5 Milan Broz 2011-01-18 17:34:48 UTC
Patch sent to list
https://www.redhat.com/archives/lvm-devel/2011-January/msg00130.html

Comment 6 Milan Broz 2011-01-19 23:13:06 UTC
Fix in upstream lvm 2.02.82.

Comment 8 Corey Marthaler 2011-04-06 16:21:28 UTC
Fix verified in the latest rpms.

2.6.32-128.el6.x86_64

lvm2-2.02.83-3.el6    BUILT: Fri Mar 18 09:31:10 CDT 2011
lvm2-libs-2.02.83-3.el6    BUILT: Fri Mar 18 09:31:10 CDT 2011
lvm2-cluster-2.02.83-3.el6    BUILT: Fri Mar 18 09:31:10 CDT 2011
udev-147-2.35.el6    BUILT: Wed Mar 30 07:32:05 CDT 2011
device-mapper-1.02.62-3.el6    BUILT: Fri Mar 18 09:31:10 CDT 2011
device-mapper-libs-1.02.62-3.el6    BUILT: Fri Mar 18 09:31:10 CDT 2011
device-mapper-event-1.02.62-3.el6    BUILT: Fri Mar 18 09:31:10 CDT 2011
device-mapper-event-libs-1.02.62-3.el6    BUILT: Fri Mar 18 09:31:10 CDT 2011
cmirror-2.02.83-3.el6    BUILT: Fri Mar 18 09:31:10 CDT 2011


[root@taft-01 ~]# service cman start
Starting cluster:
   Checking if cluster has been disabled at boot...        [  OK  ]
   Checking Network Manager...                             [  OK  ]
   Global setup...                                         [  OK  ]
   Loading kernel modules...                               [  OK  ]
   Mounting configfs...                                    [  OK  ]
   Starting cman...                                        [  OK  ]
   Waiting for quorum...                                   [  OK  ]
   Starting fenced...                                      [  OK  ]
   Starting dlm_controld...                                [  OK  ]
   Starting gfs_controld...                                [  OK  ]
   Unfencing self...                                       [  OK  ]
   Joining fence domain...                                 [  OK  ]
[root@taft-01 ~]# service clvmd start
Starting clvmd:
Activating VG(s):   4 logical volume(s) in volume group "mirror_sanity" now active
  3 logical volume(s) in volume group "vg_taft01" now active
                                                           [  OK  ]
[root@taft-01 ~]# clvmd -S

[root@taft-01 ~]# service clvmd stop
Deactivating clustered VG(s):   0 logical volume(s) in volume group "mirror_sanity" now active
  clvmd not running on node taft-04
  clvmd not running on node taft-02
                                                           [  OK  ]
Signaling clvmd to exit                                    [  OK  ]
clvmd terminated                                           [  OK  ]

[root@taft-01 ~]# service cman stop
Stopping cluster:
   Leaving fence domain...                                 [  OK  ]
   Stopping gfs_controld...                                [  OK  ]
   Stopping dlm_controld...                                [  OK  ]
   Stopping fenced...                                      [  OK  ]
   Stopping cman...                                        [  OK  ]
   Waiting for corosync to shutdown:                       [  OK  ]
   Unloading kernel modules...                             [  OK  ]
   Unmounting configfs...                                  [  OK  ]

Comment 9 errata-xmlrpc 2011-05-19 14:26:14 UTC
An advisory has been issued which should help with the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2011-0772.html