Bug 612862 - clvmd does not clean dlm lockspace on -S restart
Summary: clvmd does not clean dlm lockspace on -S restart
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.0
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Milan Broz
QA Contact: Corey Marthaler
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2010-07-09 09:17 UTC by Fabio Massimo Di Nitto
Modified: 2014-08-11 07:07 UTC
CC List: 12 users

Fixed In Version: lvm2-2.02.82-1.el6
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-05-19 14:26:14 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
Red Hat Product Errata RHBA-2011:0772 [priority: normal, status: SHIPPED_LIVE]: lvm2 bug fix and enhancement update (last updated 2011-05-18 18:08:31 UTC)

Description Fabio Massimo Di Nitto 2010-07-09 09:17:54 UTC
Version-Release number of selected component (if applicable):

lvm2-cluster-2.02.69-1.el6.x86_64

How reproducible:

always

Steps to Reproduce:
1. service cman start
2. service clvmd start
3. clvmd -S
4. service clvmd stop
5. service cman stop
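
A scripted version of these steps, for convenience (a minimal sketch; it assumes a configured RHEL 6 cluster node with the cman and clvmd init scripts installed, and must be run as root):

#!/bin/bash
# Reproduce the stale-lockspace failure by running the steps above in order.
set -x
service cman start
service clvmd start
clvmd -S                     # restart clvmd in place
service clvmd stop
service cman stop            # fails: fence_tool finds the leftover clvmd lockspace
ls /sys/kernel/dlm/          # a "clvmd" entry is still present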
  
Actual results:

[root@rhel6-node2 ~]# /etc/init.d/cman stop
Stopping cluster: 
   Leaving fence domain... found dlm lockspace /sys/kernel/dlm/clvmd
fence_tool: cannot leave due to active systems
                                                           [FAILED]

Expected results:

The clvmd dlm lockspace should be cleared. It's not possible to clear it even via the command line.

dlm_tool leave clvmd reports success, but the lockspace seems to be unremovable.

Comment 1 Fabio Massimo Di Nitto 2010-07-09 09:20:02 UTC
Small correction: repeating dlm_tool leave N times will eventually remove the lockspace. The reference counter is probably incremented each time clvmd -S is executed.
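
A workaround sketch based on this observation (it assumes the extra references come only from the clvmd -S restarts and that nothing else on the node is using the lockspace):

# Drop the leaked references one leave at a time until the lockspace is gone.
while [ -d /sys/kernel/dlm/clvmd ]; do
    dlm_tool leave clvmd || break
done
service cman stop            # should now leave the fence domain cleanly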

Comment 3 RHEL Program Management 2010-07-15 14:32:37 UTC
This issue was proposed at a time when only blocker issues are being
considered for the current Red Hat Enterprise Linux release, and it
has been denied for the current release.

** If you would still like this issue considered for the current
release, ask your support representative to file it as a blocker on
your behalf. Otherwise, ask that it be considered for the next
Red Hat Enterprise Linux release. **

Comment 4 Milan Broz 2010-11-16 14:13:48 UTC
This needs some investigation, but clvmd -S should not keep any additional references to resources.
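
One way to check whether the reference count tracks the number of restarts (an illustrative sketch; the choice of three restarts is arbitrary):

# After N in-place restarts, count how many leaves are needed once clvmd has stopped.
service cman start && service clvmd start
for i in 1 2 3; do clvmd -S; done        # three in-place restarts
service clvmd stop
n=0
while [ -d /sys/kernel/dlm/clvmd ]; do
    dlm_tool leave clvmd || break
    n=$((n+1))
done
echo "leaked references: $n"             # expect 3 if each -S leaks one reference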

Comment 5 Milan Broz 2011-01-18 17:34:48 UTC
Patch sent to list
https://www.redhat.com/archives/lvm-devel/2011-January/msg00130.html

Comment 6 Milan Broz 2011-01-19 23:13:06 UTC
Fix in upstream lvm 2.02.82.

Comment 8 Corey Marthaler 2011-04-06 16:21:28 UTC
Fix verified in the latest rpms.

2.6.32-128.el6.x86_64

lvm2-2.02.83-3.el6    BUILT: Fri Mar 18 09:31:10 CDT 2011
lvm2-libs-2.02.83-3.el6    BUILT: Fri Mar 18 09:31:10 CDT 2011
lvm2-cluster-2.02.83-3.el6    BUILT: Fri Mar 18 09:31:10 CDT 2011
udev-147-2.35.el6    BUILT: Wed Mar 30 07:32:05 CDT 2011
device-mapper-1.02.62-3.el6    BUILT: Fri Mar 18 09:31:10 CDT 2011
device-mapper-libs-1.02.62-3.el6    BUILT: Fri Mar 18 09:31:10 CDT 2011
device-mapper-event-1.02.62-3.el6    BUILT: Fri Mar 18 09:31:10 CDT 2011
device-mapper-event-libs-1.02.62-3.el6    BUILT: Fri Mar 18 09:31:10 CDT 2011
cmirror-2.02.83-3.el6    BUILT: Fri Mar 18 09:31:10 CDT 2011


[root@taft-01 ~]# service cman start
Starting cluster:
   Checking if cluster has been disabled at boot...        [  OK  ]
   Checking Network Manager...                             [  OK  ]
   Global setup...                                         [  OK  ]
   Loading kernel modules...                               [  OK  ]
   Mounting configfs...                                    [  OK  ]
   Starting cman...                                        [  OK  ]
   Waiting for quorum...                                   [  OK  ]
   Starting fenced...                                      [  OK  ]
   Starting dlm_controld...                                [  OK  ]
   Starting gfs_controld...                                [  OK  ]
   Unfencing self...                                       [  OK  ]
   Joining fence domain...                                 [  OK  ]
[root@taft-01 ~]# service clvmd start
Starting clvmd:
Activating VG(s):   4 logical volume(s) in volume group "mirror_sanity" now active
  3 logical volume(s) in volume group "vg_taft01" now active
                                                           [  OK  ]
[root@taft-01 ~]# clvmd -S

[root@taft-01 ~]# service clvmd stop
Deactivating clustered VG(s):   0 logical volume(s) in volume group "mirror_sanity" now active
  clvmd not running on node taft-04
  clvmd not running on node taft-02
                                                           [  OK  ]
Signaling clvmd to exit                                    [  OK  ]
clvmd terminated                                           [  OK  ]

[root@taft-01 ~]# service cman stop
Stopping cluster:
   Leaving fence domain...                                 [  OK  ]
   Stopping gfs_controld...                                [  OK  ]
   Stopping dlm_controld...                                [  OK  ]
   Stopping fenced...                                      [  OK  ]
   Stopping cman...                                        [  OK  ]
   Waiting for corosync to shutdown:                       [  OK  ]
   Unloading kernel modules...                             [  OK  ]
   Unmounting configfs...                                  [  OK  ]

Comment 9 errata-xmlrpc 2011-05-19 14:26:14 UTC
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2011-0772.html

