Description of problem:
If you have a GFS2 file system mounted with "lock_nolock", you can't start the cluster software.

Version-Release number of selected component (if applicable):
6.3

How reproducible:
Always

Steps to Reproduce:
1. mkfs.gfs2 -O -j1 -p lock_nolock -t intec_cluster:sas /dev/sasdrives/scratch &> /dev/null
2. mount -t gfs2 /dev/sasdrives/scratch /mnt/gfs2
3. service cman start

Actual results:
[root@intec2 ../bob/cluster.git/fence]# service cman start
Starting cluster: 
   Checking if cluster has been disabled at boot...        [  OK  ]
   Checking Network Manager...                             [  OK  ]
   Global setup...                                         [  OK  ]
   Loading kernel modules...                               [  OK  ]
   Mounting configfs...                                    [  OK  ]
   Starting cman...                                        [  OK  ]
   Waiting for quorum...                                   [  OK  ]
   Starting fenced...                                      [  OK  ]
   Starting dlm_controld...                                [  OK  ]
   Tuning DLM kernel hash tables...                        [  OK  ]
   Starting gfs_controld...                                [  OK  ]
   Unfencing self...                                       [  OK  ]
   Joining fence domain... fence_tool: fenced not running, no lockfile
                                                           [FAILED]
Stopping cluster: 
   Leaving fence domain...                                 [  OK  ]
   Stopping gfs_controld...                                [  OK  ]
   Stopping dlm_controld...                                [FAILED]
[root@intec2 ../bob/cluster.git/fence]# 

Expected results:
[root@intec2 ../group/gfs_controld]# service cman start
Starting cluster: 
   Checking if cluster has been disabled at boot...        [  OK  ]
   Checking Network Manager...                             [  OK  ]
   Global setup...                                         [  OK  ]
   Loading kernel modules...                               [  OK  ]
   Mounting configfs...                                    [  OK  ]
   Starting cman...                                        [  OK  ]
   Waiting for quorum...                                   [  OK  ]
   Starting fenced...                                      [  OK  ]
   Starting dlm_controld...                                [  OK  ]
   Tuning DLM kernel hash tables...                        [  OK  ]
   Starting gfs_controld...                                [  OK  ]
   Unfencing self...                                       [  OK  ]
   Joining fence domain...                                 [  OK  ]
[root@intec2 ../group/gfs_controld]# 

Additional info:
I have a working patch.
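The failure above comes down to the init scripts mishandling the case where a gfs2 filesystem is already mounted locally when the cluster stack starts. As a rough illustration of the kind of check involved, and not the actual patch, here is a minimal sketch that filters gfs2 mount points out of a /proc/mounts-style table (the sample table contents and the awk-based filter are assumptions for illustration):

```shell
# Sketch only: given mount-table lines in /proc/mounts format
# (device, mountpoint, fstype, options, dump, pass), print the
# mount points whose filesystem type is gfs2.
mounts='/dev/sdb1 /mnt/gfs2 gfs2 rw,relatime 0 0
/dev/sda1 / ext4 rw,relatime 0 0'

printf '%s\n' "$mounts" | awk '$3 == "gfs2" {print $2}'
# prints: /mnt/gfs2
```

On a live system the same filter could be run against /proc/mounts itself; whether a given gfs2 mount actually uses lock_nolock would additionally have to be determined from its mount options or superblock, which this sketch does not attempt.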
Created attachment 608257 [details]
Proposed and tested patch
Pushed to the cluster.git RHEL6 branch:
http://git.fedorahosted.org/cgit/cluster.git/commit/?h=RHEL6&id=6b7602b0f65268e2f09c87a314cda3947d839b35
Verified in cman-3.0.12.1-45.

[root@dash-01 ~]# rpm -q cman
cman-3.0.12.1-32.el6.x86_64
[root@dash-01 ~]# mkfs.gfs2 -O -j1 -p lock_nolock -t dash:gfs2 /dev/sdb1 &> /dev/null
[root@dash-01 ~]# mount -t gfs2 /dev/sdb1 /mnt/gfs2/
[root@dash-01 ~]# service cman start
Starting cluster: 
   Checking if cluster has been disabled at boot...        [  OK  ]
   Checking Network Manager...                             [  OK  ]
   Global setup...                                         [  OK  ]
   Loading kernel modules...                               [  OK  ]
   Mounting configfs...                                    [  OK  ]
   Starting cman...                                        [  OK  ]
   Waiting for quorum...                                   [  OK  ]
   Starting fenced...                                      [  OK  ]
   Starting dlm_controld...                                [  OK  ]
   Starting gfs_controld...                                [  OK  ]
   Unfencing self... fence_node: cannot connect to cman
                                                           [FAILED]
Stopping cluster: 
   Leaving fence domain...                                 [  OK  ]
   Stopping gfs_controld...                                [  OK  ]
   Stopping dlm_controld...                                [  OK  ]
   Stopping fenced...                                      [  OK  ]
   Stopping cman...                                        [  OK  ]
   Unloading kernel modules...                             [  OK  ]
   Unmounting configfs...                                  [  OK  ]

[root@dash-01 ~]# rpm -q cman
cman-3.0.12.1-45.el6.x86_64
[root@dash-01 ~]# mount -t gfs2 /dev/sdb1 /mnt/gfs2/
[root@dash-01 ~]# service cman start
Starting cluster: 
   Checking if cluster has been disabled at boot...        [  OK  ]
   Checking Network Manager...                             [  OK  ]
   Global setup...                                         [  OK  ]
   Loading kernel modules...                               [  OK  ]
   Mounting configfs...                                    [  OK  ]
   Starting cman...                                        [  OK  ]
   Waiting for quorum...                                   [  OK  ]
   Starting fenced...                                      [  OK  ]
   Starting dlm_controld...                                [  OK  ]
   Tuning DLM kernel config...                             [  OK  ]
   Starting gfs_controld...                                [  OK  ]
   Unfencing self...                                       [  OK  ]
   Joining fence domain...                                 [  OK  ]
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-0287.html