Bug 252241 - self_fence missing from clusterfs.sh, preventing reboot if unmount fails
Status: CLOSED ERRATA
Product: Red Hat Cluster Suite
Classification: Red Hat
Component: rgmanager
Version: 4
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Assigned To: Lon Hohberger
QA Contact: Cluster QE
Blocks: 295781
Reported: 2007-08-14 17:01 EDT by Corey Marthaler
Modified: 2009-04-16 16:22 EDT

Fixed In Version: RHBA-2007-1000
Doc Type: Bug Fix
Last Closed: 2007-11-21 16:53:17 EST


Attachments
Fixes behavior (1.49 KB, patch)
2007-09-18 16:48 EDT, Lon Hohberger

Description Corey Marthaler 2007-08-14 17:01:24 EDT
Description of problem:
While running service failover tests with NFS I/O going...

[derringer]
================================================================================
[derringer] Iteration 4 started at Tue Aug 14 09:42:48 CDT 2007
[derringer] Verifying that all services are started on all the nodes in the cluster
[derringer] Sleeping 2 minute(s) in between each relocation...
[derringer] Relocating nfs1 from link-02 to link-07
[derringer] Relocation attempt of service nfs1 to link-07 failed


[root@link-02 ~]# clustat
Member Status: Quorate

  Member Name                              Status
  ------ ----                              ------
  link-02                                  Online, Local, rgmanager
  link-07                                  Online, rgmanager
  link-08                                  Online, rgmanager

  Service Name         Owner (Last)                   State
  ------- ----         ----- ------                   -----
  nfs1                 (link-02)                      failed
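
A service in the "failed" state stays frozen until an administrator clears it. For reference, the usual rgmanager recovery sequence (service name taken from this report; run on any cluster member) is:

  clusvcadm -d nfs1    # disable the failed service, acknowledging the failure
  clusvcadm -e nfs1    # re-enable it on an available node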


link-02:
Aug 14 09:43:28 link-02 qarshd[11138]: Running cmdline: clusvcadm -r nfs1 -m link-07
Aug 14 09:43:28 link-02 clurgmgrd[4920]: <notice> Stopping service nfs1
Aug 14 09:43:29 link-02 clurgmgrd: [4920]: <info> Removing IPv4 address 10.15.89.209 from eth0
Aug 14 09:43:39 link-02 clurgmgrd: [4920]: <info> Removing export: *:/mnt/link0
Aug 14 09:43:39 link-02 clurgmgrd: [4920]: <warning> Dropping node-wide NFS locks
Aug 14 09:43:39 link-02 clurgmgrd: [4920]: <info> Sending reclaim notifications via link-02
Aug 14 09:43:39 link-02 rpc.statd[11267]: Version 1.0.6 Starting
Aug 14 09:43:39 link-02 rpc.statd[11267]: Flags: No-Daemon Notify-Only
Aug 14 09:43:42 link-02 rpc.statd[11267]: Caught signal 15, un-registering and exiting.
Aug 14 09:43:42 link-02 clurgmgrd: [4920]: <info> unmounting /dev/mapper/LINK_128-LINK_1280 (/mnt/link0)
Aug 14 09:43:42 link-02 clurgmgrd: [4920]: <notice> Forcefully unmounting /mnt/link0
Aug 14 09:43:47 link-02 clurgmgrd: [4920]: <info> unmounting /dev/mapper/LINK_128-LINK_1280 (/mnt/link0)
Aug 14 09:43:47 link-02 clurgmgrd: [4920]: <notice> Forcefully unmounting /mnt/link0
Aug 14 09:43:47 link-02 clurgmgrd: [4920]: <err> 'umount /dev/mapper/LINK_128-LINK_1280' failed (/mnt/link0), error=0
Aug 14 09:43:47 link-02 clurgmgrd[4920]: <notice> stop on clusterfs:LINK_1280 returned 2 (invalid argument(s))
Aug 14 09:43:47 link-02 clurgmgrd: [4920]: <info> Removing export: *:/mnt/link1
Aug 14 09:43:47 link-02 clurgmgrd: [4920]: <info> unmounting /mnt/link1
Aug 14 09:43:47 link-02 clurgmgrd[4920]: <crit> #12: RG nfs1 failed to stop; intervention required
Aug 14 09:43:47 link-02 clurgmgrd[4920]: <notice> Service nfs1 is failed
Aug 14 09:43:48 link-02 clurgmgrd[4920]: <alert> #2: Service nfs1 returned failure code.  Last Owner: link-02
Aug 14 09:43:48 link-02 clurgmgrd[4920]: <alert> #4: Administrator intervention required.



link-07:
Aug 14 09:35:32 link-07 clurgmgrd: [31175]: <info> Adding export: *:/mnt/link0 (fsid=3151,rw)
Aug 14 09:35:32 link-07 clurgmgrd: [31175]: <info> Adding IPv4 address 10.15.89.209 to eth0
Aug 14 09:35:33 link-07 clurgmgrd: [31175]: <info> Sending reclaim notifications via link-07
Aug 14 09:35:33 link-07 rpc.statd[32332]: Version 1.0.6 Starting
Aug 14 09:35:33 link-07 rpc.statd[32332]: Flags: No-Daemon Notify-Only
Aug 14 09:35:36 link-07 rpc.statd[32332]: Caught signal 15, un-registering and exiting.
Aug 14 09:35:36 link-07 clurgmgrd: [31175]: <info> Sending reclaim notifications via rg-209.lab.msp.redhat.com
Aug 14 09:35:36 link-07 rpc.statd[32363]: Version 1.0.6 Starting
Aug 14 09:35:36 link-07 rpc.statd[32363]: Flags: No-Daemon Notify-Only
Aug 14 09:35:39 link-07 rpc.statd[32363]: Caught signal 15, un-registering and exiting.
Aug 14 09:35:39 link-07 clurgmgrd[31175]: <notice> Service nfs1 started
Aug 14 09:38:00 link-07 clurgmgrd[31175]: <notice> Stopping service nfs1
Aug 14 09:38:00 link-07 clurgmgrd: [31175]: <info> Removing IPv4 address 10.15.89.209 from eth0
Aug 14 09:38:10 link-07 clurgmgrd: [31175]: <info> Removing export: *:/mnt/link0
Aug 14 09:38:10 link-07 clurgmgrd: [31175]: <warning> Dropping node-wide NFS locks
Aug 14 09:38:10 link-07 clurgmgrd: [31175]: <info> Sending reclaim notifications via link-07
Aug 14 09:38:10 link-07 rpc.statd[2002]: Version 1.0.6 Starting
Aug 14 09:38:10 link-07 rpc.statd[2002]: Flags: No-Daemon Notify-Only
Aug 14 09:38:13 link-07 rpc.statd[2002]: Caught signal 15, un-registering and exiting.
Aug 14 09:38:13 link-07 clurgmgrd: [31175]: <info> unmounting /dev/mapper/LINK_128-LINK_1280 (/mnt/link0)
Aug 14 09:38:14 link-07 clurgmgrd: [31175]: <info> Removing export: *:/mnt/link1
Aug 14 09:38:14 link-07 clurgmgrd: [31175]: <info> unmounting /mnt/link1
Aug 14 09:38:14 link-07 clurgmgrd[31175]: <notice> Service nfs1 is stopped
Aug 14 09:43:47 link-07 clurgmgrd[31175]: <err> #43: Service nfs1 has failed; can not start.



link-08:
Aug 14 09:38:17 link-08 clurgmgrd: [3697]: <info> Adding export: *:/mnt/link0 (fsid=3151,rw)
Aug 14 09:38:17 link-08 clurgmgrd: [3697]: <info> Adding IPv4 address 10.15.89.209 to eth0
Aug 14 09:38:18 link-08 clurgmgrd: [3697]: <info> Sending reclaim notifications via link-08
Aug 14 09:38:18 link-08 rpc.statd[5053]: Version 1.0.6 Starting
Aug 14 09:38:18 link-08 rpc.statd[5053]: Flags: No-Daemon Notify-Only
Aug 14 09:38:21 link-08 rpc.statd[5053]: Caught signal 15, un-registering and exiting.
Aug 14 09:38:21 link-08 clurgmgrd: [3697]: <info> Sending reclaim notifications via rg-209.lab.msp.redhat.com
Aug 14 09:38:21 link-08 rpc.statd[5090]: Version 1.0.6 Starting
Aug 14 09:38:21 link-08 rpc.statd[5090]: Flags: No-Daemon Notify-Only
Aug 14 09:38:24 link-08 rpc.statd[5090]: Caught signal 15, un-registering and exiting.
Aug 14 09:38:24 link-08 clurgmgrd[3697]: <notice> Service nfs1 started
Aug 14 09:40:44 link-08 clurgmgrd[3697]: <notice> Stopping service nfs1
Aug 14 09:40:45 link-08 clurgmgrd: [3697]: <info> Removing IPv4 address 10.15.89.209 from eth0
Aug 14 09:40:55 link-08 clurgmgrd: [3697]: <info> Removing export: *:/mnt/link0
Aug 14 09:40:55 link-08 clurgmgrd: [3697]: <warning> Dropping node-wide NFS locks
Aug 14 09:40:55 link-08 clurgmgrd: [3697]: <info> Sending reclaim notifications via link-08
Aug 14 09:40:55 link-08 rpc.statd[7183]: Version 1.0.6 Starting
Aug 14 09:40:55 link-08 rpc.statd[7183]: Flags: No-Daemon Notify-Only
Aug 14 09:40:58 link-08 rpc.statd[7183]: Caught signal 15, un-registering and exiting.
Aug 14 09:40:58 link-08 clurgmgrd: [3697]: <info> unmounting /dev/mapper/LINK_128-LINK_1280 (/mnt/link0)
Aug 14 09:40:58 link-08 clurgmgrd: [3697]: <info> Removing export: *:/mnt/link1
Aug 14 09:40:58 link-08 clurgmgrd: [3697]: <info> unmounting /mnt/link1
Aug 14 09:40:58 link-08 clurgmgrd[3697]: <notice> Service nfs1 is stopped
Aug 14 09:43:48 link-08 clurgmgrd[3697]: <err> #43: Service nfs1 has failed; can not start.


Version-Release number of selected component (if applicable):
2.6.9-55.0.3.ELsmp
rgmanager-1.9.68-1
Comment 1 Corey Marthaler 2007-08-14 17:03:17 EDT
Here's the resource section of the .conf file:

<rm>
    <failoverdomains>
      <failoverdomain name="LINK_128_domain" ordered="0" restricted="0">
        <failoverdomainnode name="link-02" priority="1"/>
        <failoverdomainnode name="link-07" priority="1"/>
        <failoverdomainnode name="link-08" priority="1"/>
      </failoverdomain>
    </failoverdomains>
    <resources>
      <ip address="10.15.89.209" monitor_link="1"/>
      <clusterfs device="/dev/LINK_128/LINK_1280" force_unmount="1" self_fence="1" fsid="3151" fstype="gfs" mountpoint="/mnt/link0" name="LINK_1280" options=""/>
      <fs device="/dev/LINK_128/LINK_1281" force_fsck="0" force_unmount="1" self_fence="1" fsid="9968" fstype="ext3" mountpoint="/mnt/link1" name="LINK_1281" options=""/>
      <nfsexport name="LINK_128 nfs exports"/>
      <nfsclient name="*" options="rw" target="*"/>
    </resources>
    <service autostart="1" domain="LINK_128_domain" name="nfs1" nfslock="1">
      <clusterfs ref="LINK_1280">
        <nfsexport ref="LINK_128 nfs exports">
          <nfsclient ref="*"/>
        </nfsexport>
      </clusterfs>
      <fs ref="LINK_1281">
        <nfsexport ref="LINK_128 nfs exports">
          <nfsclient ref="*"/>
        </nfsexport>
      </fs>
      <ip ref="10.15.89.209"/>
    </service>
  </rm>
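
Note that both the clusterfs and fs resources above request self_fence="1". A quick way to see which shipped agents actually honor the option (assuming the standard rgmanager agent directory, /usr/share/cluster):

  # Agents that implement self_fence mention it in their scripts;
  # before the fix, clusterfs.sh would not appear in this list.
  grep -l self_fence /usr/share/cluster/*.sh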
Comment 2 Lon Hohberger 2007-08-15 09:39:56 EDT
There are really two issues here:

(1) A leaked lock or some other kind of reference prevented the unmount from
succeeding.  It's not likely an NFS lock, since even stopping nfsd/lockd didn't
clean it up.  fuser and lsof show no open references on the file system, yet
nothing can unmount it.

(2) rgmanager did NOT reboot the node even though the unmount failed and
self_fence was specified.  As it turns out, clusterfs.sh does not support the
self_fence option at all; only fs.sh (the non-cluster agent) does.

It's easy to add self_fence to clusterfs.sh.
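
Conceptually the change is the same check fs.sh already performs at the end of its stop path. A minimal sketch, with variable and helper names assumed rather than taken from the shipped agents:

  # Sketch only - names are illustrative, not the actual patch.
  # After all unmount attempts (including forced ones) are exhausted:
  if [ -n "$umount_failed" ]; then
          if [ "$self_fence" = "1" ] || [ "$self_fence" = "yes" ]; then
                  ocf_log alert "umount failed - REBOOTING"
                  sync
                  reboot -fn    # hard reboot; the node fences itself
          fi
          return $OCF_ERR_GENERIC    # otherwise report the stop failure
  fi

The hard reboot is the point: returning a failure instead leaves the service stuck in the failed state seen above, while self-fencing lets the rest of the cluster recover the service safely.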
Comment 3 Lon Hohberger 2007-08-15 09:42:47 EDT
I'm going to create a clone bug for the lock leak.
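
For reference, the first-pass checks for a pinned mount point (which, per comment 2, came up empty in this case) are along these lines:

  # What is holding /mnt/link0 open?
  fuser -vm /mnt/link0      # processes with references on the filesystem
  lsof /mnt/link0           # open files under the mount point
  grep link0 /proc/mounts   # confirm the kernel still has the mount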
Comment 5 RHEL Product and Program Management 2007-08-28 10:27:34 EDT
This bugzilla has Keywords: Regression.  Since no regressions are allowed
between releases, it is also being proposed as a blocker for this release.
Please resolve ASAP.
Comment 7 Lon Hohberger 2007-09-18 16:48:09 EDT
Created attachment 198851
Fixes behavior

Enables self_fence to work in clusterfs.sh resource agents.
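
For anyone testing the attachment ahead of the erratum, application is the usual drill (patch level and file name below are assumed; rgmanager executes agent scripts per operation, so no daemon restart should be needed):

  # Hypothetical application of the attached patch on a test node
  cd /usr/share/cluster
  patch -p0 < /tmp/clusterfs-self-fence.patch   # file name assumed
  grep -n self_fence clusterfs.sh               # confirm the option is now handled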
Comment 10 errata-xmlrpc 2007-11-21 16:53:17 EST
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2007-1000.html
