Bug 853180
| Summary: | Cluster doesn't start if GFS2 is mounted as "lock_nolock" | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 6 | Reporter: | Robert Peterson <rpeterso> |
| Component: | cluster | Assignee: | David Teigland <teigland> |
| Status: | CLOSED ERRATA | QA Contact: | Cluster QE <mspqa-list> |
| Severity: | low | Docs Contact: | |
| Priority: | low | | |
| Version: | 6.4 | CC: | ccaulfie, cluster-maint, fdinitto, jpayne, lhh, mjuricek, rpeterso, teigland |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | cluster-3.0.12.1-37.el6 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2013-02-21 07:42:53 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | | | |
Created attachment 608257 [details]
Proposed and tested patch

Pushed to cluster.git RHEL6 branch: http://git.fedorahosted.org/cgit/cluster.git/commit/?h=RHEL6&id=6b7602b0f65268e2f09c87a314cda3947d839b35

Verified in cman-3.0.12.1-45:
Before the fix (cman-3.0.12.1-32), starting the cluster fails while a lock_nolock GFS2 file system is mounted:

```
[root@dash-01 ~]# rpm -q cman
cman-3.0.12.1-32.el6.x86_64
[root@dash-01 ~]# mkfs.gfs2 -O -j1 -p lock_nolock -t dash:gfs2 /dev/sdb1 &> /dev/null
[root@dash-01 ~]# mount -t gfs2 /dev/sdb1 /mnt/gfs2/
[root@dash-01 ~]# service cman start
Starting cluster:
Checking if cluster has been disabled at boot... [ OK ]
Checking Network Manager... [ OK ]
Global setup... [ OK ]
Loading kernel modules... [ OK ]
Mounting configfs... [ OK ]
Starting cman... [ OK ]
Waiting for quorum... [ OK ]
Starting fenced... [ OK ]
Starting dlm_controld... [ OK ]
Starting gfs_controld... [ OK ]
Unfencing self... fence_node: cannot connect to cman [FAILED]
Stopping cluster:
Leaving fence domain... [ OK ]
Stopping gfs_controld... [ OK ]
Stopping dlm_controld... [ OK ]
Stopping fenced... [ OK ]
Stopping cman... [ OK ]
Unloading kernel modules... [ OK ]
Unmounting configfs... [ OK ]
```

With the fixed package (cman-3.0.12.1-45), the cluster starts cleanly under the same conditions:

```
[root@dash-01 ~]# rpm -q cman
cman-3.0.12.1-45.el6.x86_64
[root@dash-01 ~]# mount -t gfs2 /dev/sdb1 /mnt/gfs2/
[root@dash-01 ~]# service cman start
Starting cluster:
Checking if cluster has been disabled at boot... [ OK ]
Checking Network Manager... [ OK ]
Global setup... [ OK ]
Loading kernel modules... [ OK ]
Mounting configfs... [ OK ]
Starting cman... [ OK ]
Waiting for quorum... [ OK ]
Starting fenced... [ OK ]
Starting dlm_controld... [ OK ]
Tuning DLM kernel config... [ OK ]
Starting gfs_controld... [ OK ]
Unfencing self... [ OK ]
Joining fence domain... [ OK ]
```
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-0287.html
Description of problem:
If you have a GFS2 file system mounted as "lock_nolock", you can't start the cluster software.

Version-Release number of selected component (if applicable):
6.3

How reproducible:
Always

Steps to Reproduce:
1. mkfs.gfs2 -O -j1 -p lock_nolock -t intec_cluster:sas /dev/sasdrives/scratch &> /dev/null
2. mount -t gfs2 /dev/sasdrives/scratch /mnt/gfs2
3. service cman start

Actual results:
```
[root@intec2 ../bob/cluster.git/fence]# service cman start
Starting cluster:
Checking if cluster has been disabled at boot... [ OK ]
Checking Network Manager... [ OK ]
Global setup... [ OK ]
Loading kernel modules... [ OK ]
Mounting configfs... [ OK ]
Starting cman... [ OK ]
Waiting for quorum... [ OK ]
Starting fenced... [ OK ]
Starting dlm_controld... [ OK ]
Tuning DLM kernel hash tables... [ OK ]
Starting gfs_controld... [ OK ]
Unfencing self... [ OK ]
Joining fence domain... fence_tool: fenced not running, no lockfile [FAILED]
Stopping cluster:
Leaving fence domain... [ OK ]
Stopping gfs_controld... [ OK ]
Stopping dlm_controld... [FAILED]
[root@intec2 ../bob/cluster.git/fence]#
```

Expected results:
```
[root@intec2 ../group/gfs_controld]# service cman start
Starting cluster:
Checking if cluster has been disabled at boot... [ OK ]
Checking Network Manager... [ OK ]
Global setup... [ OK ]
Loading kernel modules... [ OK ]
Mounting configfs... [ OK ]
Starting cman... [ OK ]
Waiting for quorum... [ OK ]
Starting fenced... [ OK ]
Starting dlm_controld... [ OK ]
Tuning DLM kernel hash tables... [ OK ]
Starting gfs_controld... [ OK ]
Unfencing self... [ OK ]
Joining fence domain... [ OK ]
[root@intec2 ../group/gfs_controld]#
```

Additional info:
I have a working patch.
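For illustration only (not part of the fix), a lock_nolock GFS2 mount like the one created by the reproduction steps above can be spotted by its lock protocol in the mount options. The sketch below assumes a hypothetical /proc/mounts-style line for the device used in the steps; on a real system you would read /proc/mounts itself.

```shell
#!/bin/sh
# Sketch: flag GFS2 mounts that use the local lock_nolock protocol.
# "sample" is a hypothetical /proc/mounts entry matching the
# reproduction steps; a real check would read /proc/mounts directly.
sample='/dev/sasdrives/scratch /mnt/gfs2 gfs2 rw,relatime,lockproto=lock_nolock 0 0'
printf '%s\n' "$sample" |
awk '$3 == "gfs2" && $4 ~ /lock_nolock/ { print $2 " is mounted lock_nolock" }'
```

Running the sketch prints the mount point of any matching file system (here, /mnt/gfs2).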