Bug 853180 - Cluster doesn't start if GFS2 is mounted as "lock_nolock"
Summary: Cluster doesn't start if GFS2 is mounted as "lock_nolock"
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: cluster
Version: 6.4
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: rc
Target Release: ---
Assignee: David Teigland
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-08-30 16:43 UTC by Robert Peterson
Modified: 2013-02-21 07:42 UTC
CC List: 8 users

Fixed In Version: cluster-3.0.12.1-37.el6
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-02-21 07:42:53 UTC
Target Upstream Version:
Embargoed:


Attachments
Proposed and tested patch (1.51 KB, patch), posted 2012-08-30 16:45 UTC by Robert Peterson, no flags


Links
Red Hat Product Errata RHBA-2013:0287 (normal, SHIPPED_LIVE): cluster and gfs2-utils bug fix and enhancement update. Last updated: 2013-02-20 20:36:42 UTC

Description Robert Peterson 2012-08-30 16:43:29 UTC
Description of problem:
If you have a GFS2 file system mounted as "lock_nolock", you can't start the cluster software.
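
To confirm which lock protocol a mounted GFS2 file system is using, you can query its superblock (assuming gfs2_tool from gfs2-utils is available; the device path is the one from the steps below):

  gfs2_tool sb /dev/sasdrives/scratch proto

For a nolock file system this should report something like:

  current lock protocol name = "lock_nolock"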

Version-Release number of selected component (if applicable):
6.3

How reproducible:
Always

Steps to Reproduce:
1. mkfs.gfs2 -O -j1 -p lock_nolock -t intec_cluster:sas /dev/sasdrives/scratch &> /dev/null
2. mount -t gfs2 /dev/sasdrives/scratch /mnt/gfs2
3. service cman start
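
Until a fixed package is installed, a possible workaround (assuming the nolock file system can be unmounted briefly) is to unmount it before starting the cluster, then remount it afterwards:

  umount /mnt/gfs2
  service cman start
  mount -t gfs2 /dev/sasdrives/scratch /mnt/gfs2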
  
Actual results:
[root@intec2 ../bob/cluster.git/fence]# service cman start
Starting cluster: 
   Checking if cluster has been disabled at boot...        [  OK  ]
   Checking Network Manager...                             [  OK  ]
   Global setup...                                         [  OK  ]
   Loading kernel modules...                               [  OK  ]
   Mounting configfs...                                    [  OK  ]
   Starting cman...                                        [  OK  ]
   Waiting for quorum...                                   [  OK  ]
   Starting fenced...                                      [  OK  ]
   Starting dlm_controld...                                [  OK  ]
   Tuning DLM kernel hash tables...                        [  OK  ]
   Starting gfs_controld...                                [  OK  ]
   Unfencing self...                                       [  OK  ]
   Joining fence domain... fence_tool: fenced not running, no lockfile
                                                           [FAILED]
Stopping cluster: 
   Leaving fence domain...                                 [  OK  ]
   Stopping gfs_controld...                                [  OK  ]
   Stopping dlm_controld...                                [FAILED]
[root@intec2 ../bob/cluster.git/fence]# 

Expected results:
[root@intec2 ../group/gfs_controld]# service cman start
Starting cluster: 
   Checking if cluster has been disabled at boot...        [  OK  ]
   Checking Network Manager...                             [  OK  ]
   Global setup...                                         [  OK  ]
   Loading kernel modules...                               [  OK  ]
   Mounting configfs...                                    [  OK  ]
   Starting cman...                                        [  OK  ]
   Waiting for quorum...                                   [  OK  ]
   Starting fenced...                                      [  OK  ]
   Starting dlm_controld...                                [  OK  ]
   Tuning DLM kernel hash tables...                        [  OK  ]
   Starting gfs_controld...                                [  OK  ]
   Unfencing self...                                       [  OK  ]
   Joining fence domain...                                 [  OK  ]
[root@intec2 ../group/gfs_controld]# 

Additional info:
I have a working patch.

Comment 1 Robert Peterson 2012-08-30 16:45:14 UTC
Created attachment 608257 [details]
Proposed and tested patch

Comment 3 David Teigland 2012-08-30 16:56:51 UTC
Pushed to the cluster.git RHEL6 branch:
http://git.fedorahosted.org/cgit/cluster.git/commit/?h=RHEL6&id=6b7602b0f65268e2f09c87a314cda3947d839b35

Comment 6 Justin Payne 2012-11-01 20:04:02 UTC
Verified in cman-3.0.12.1-45

[root@dash-01 ~]# rpm -q cman
cman-3.0.12.1-32.el6.x86_64
[root@dash-01 ~]# mkfs.gfs2 -O -j1 -p lock_nolock -t dash:gfs2 /dev/sdb1 &> /dev/null
[root@dash-01 ~]# mount -t gfs2 /dev/sdb1 /mnt/gfs2/
[root@dash-01 ~]# service cman start
Starting cluster:
   Checking if cluster has been disabled at boot...        [  OK  ]
   Checking Network Manager...                             [  OK  ]
   Global setup...                                         [  OK  ]
   Loading kernel modules...                               [  OK  ]
   Mounting configfs...                                    [  OK  ]
   Starting cman...                                        [  OK  ]
   Waiting for quorum...                                   [  OK  ]
   Starting fenced...                                      [  OK  ]
   Starting dlm_controld...                                [  OK  ]
   Starting gfs_controld...                                [  OK  ]
   Unfencing self... fence_node: cannot connect to cman   
                                                           [FAILED]
Stopping cluster:
   Leaving fence domain...                                 [  OK  ]
   Stopping gfs_controld...                                [  OK  ]
   Stopping dlm_controld...                                [  OK  ]
   Stopping fenced...                                      [  OK  ]
   Stopping cman...                                        [  OK  ]
   Unloading kernel modules...                             [  OK  ]
   Unmounting configfs...                                  [  OK  ]

[root@dash-01 ~]# rpm -q cman
cman-3.0.12.1-45.el6.x86_64
[root@dash-01 ~]# mount -t gfs2 /dev/sdb1 /mnt/gfs2/
[root@dash-01 ~]# service cman start
Starting cluster: 
   Checking if cluster has been disabled at boot...        [  OK  ]
   Checking Network Manager...                             [  OK  ]
   Global setup...                                         [  OK  ]
   Loading kernel modules...                               [  OK  ]
   Mounting configfs...                                    [  OK  ]
   Starting cman...                                        [  OK  ]
   Waiting for quorum...                                   [  OK  ]
   Starting fenced...                                      [  OK  ]
   Starting dlm_controld...                                [  OK  ]
   Tuning DLM kernel config...                             [  OK  ]
   Starting gfs_controld...                                [  OK  ]
   Unfencing self...                                       [  OK  ]
   Joining fence domain...                                 [  OK  ]

Comment 8 errata-xmlrpc 2013-02-21 07:42:53 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-0287.html

