Bug 709398

Summary: [NetApp CQ184312] rgmanager is unable to unmount filesystems when a cluster service is disabled
Product: Red Hat Enterprise Linux 6
Component: rgmanager
Version: 6.1
Hardware: x86_64
OS: Linux
Severity: high
Priority: high
Status: CLOSED ERRATA
Reporter: Sean Stewart <Sean.Stewart>
Assignee: Lon Hohberger <lhh>
QA Contact: Cluster QE <mspqa-list>
CC: cluster-maint, djansa, fdinitto, mjuricek
Target Milestone: rc
Fixed In Version: rgmanager-3.0.12.1-1.el6
Doc Type: Bug Fix
Last Closed: 2011-12-06 11:59:44 UTC

Description Sean Stewart 2011-05-31 15:37:56 UTC
Description of problem:
During the shutdown of a service group, a cluster node removes the virtual IP address from its Ethernet interface, removes the NFS export, and then tries to unmount the logical volume. At that point, rgmanager's debug output reports that the mount point is "still in use by 1 other service(s)". The service group ends up disabled, but the mount points remain on the node; if I manually unmount the logical volumes at this point, the unmount succeeds. Note that the force unmount option is enabled for each of the mount points. This should kill the processes using the mount point and allow the unmount to succeed, but it does not.
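For reference, a clusterfs resource with force unmount enabled would look roughly like the following in cluster.conf. This is an illustrative sketch only: the device path and fstype are assumed values, not taken from the attached configuration (the mount point matches the one named later in this report).

```xml
<!-- Illustrative only: device and fstype are assumed, not from the
     attached cluster.conf; force_unmount="1" is the option in question -->
<clusterfs name="lvol4" device="/dev/mapper/vg0-lvol4" fstype="gfs2"
           mountpoint="/home/smashmnt04" force_unmount="1"/>
```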

Version-Release number of selected component (if applicable):
3.0.12-11

How reproducible:
Always

Steps to Reproduce:
1. Create a cluster configuration with the service groups set up like in the attached cluster.conf file
2. Enable a service group on a node
3. Disable the service group
  
Actual results:
When disabling the service group, it fails to unmount the mount points associated with the service group. After the service group is disabled, if I manually run "umount /home/smashmnt04", it succeeds. If the service group is started on another node, this results in the same mount point being mounted on more than one node at a time.
During the service group shutdown, rgmanager reports:
<debug>  Not unmounting clusterfs:lvol4 - still in use by 1 other service(s)

Expected results:
In RHEL6.0, this exact configuration would result in the mount points being unmounted when the service "service-192.168.20.100-1" was disabled.

Additional info:
The exact same configuration on RHEL5 (U5, U6), and RHEL6.0 has been tested and does not experience this problem.

Additionally, I tried configuring a service group as follows, but this resulted in exactly the same problem:

<service domain="kswc-achilles1" exclusive="1" name="service-192.168.20.100-1" recovery="relocate">
  <clusterfs ref="lvol0">
    <nfsexport ref="ex-sm00">
      <nfsclient ref="@local-sm00">
        <nfsclient ref="@client-sm00">
          <ip ref="192.168.20.100"/>
        </nfsclient>
      </nfsclient>
    </nfsexport>
  </clusterfs>
  <clusterfs ref="lvol4">
    <nfsexport ref="ex-sm04">
      <nfsclient ref="@local-sm04">
        <nfsclient ref="@client-sm04">
          <ip ref="192.168.20.100"/>
        </nfsclient>
      </nfsclient>
    </nfsexport>
  </clusterfs>
</service>

Comment 2 Lon Hohberger 2011-05-31 21:31:32 UTC
This is fixed upstream and will be included as a component of a planned rebase of rgmanager in the next release of Red Hat Enterprise Linux 6:

http://git.fedorahosted.org/git?p=cluster.git;a=commit;h=c1d789ec9c1652eff3150dad56f5f1e3a90d0ef7

Comment 3 Lon Hohberger 2011-05-31 21:32:56 UTC
See bug 707118

Comment 4 Sean Stewart 2011-05-31 21:56:03 UTC
Is there any easy way to work around this problem? I'd hate for us not to be able to support Red Hat native cluster for 6.1.  

For example, I noticed that in the clusterfs.sh script, the comments say it should unmount if the reference count is greater than or equal to 1, but the conditional below that tests for strictly greater than 1. If I change it to greater than or equal, it unmounts as expected. I suspect this isn't a feasible fix, though, since it could have other repercussions.
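The off-by-one described above can be illustrated with a standalone sketch. This is not the actual clusterfs.sh code; the function name and variable are invented for demonstration, and the real agent computes the reference count differently.

```shell
#!/bin/sh
# Illustrative sketch of the reference-count check described above.
# "refcnt" stands in for the number of services the agent believes are
# still using the mount point.

should_unmount() {
    refcnt=$1
    # The original conditional was effectively [ "$refcnt" -gt 1 ], so a
    # refcount of 1 (just the service being stopped) blocked the unmount.
    # The comment in the script described "greater than or equal to 1":
    if [ "$refcnt" -ge 1 ]; then
        echo "unmount"
    else
        echo "skip"
    fi
}

should_unmount 1   # prints "unmount" with -ge; "skip" with the old -gt test
```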

Comment 9 Lon Hohberger 2011-07-21 14:13:31 UTC
You could edit the 'clusterfs' agent if you wanted to alter the reference count handling there.

Otherwise, there is no specific workaround currently.  It's not considered "bad" to leave a clustered file system mounted.

Comment 11 errata-xmlrpc 2011-12-06 11:59:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2011-1595.html