Bug 1389117 - nfs mounts are duplicated when server is a cluster NAS (eg isilon)
Summary: nfs mounts are duplicated when server is a cluster NAS (eg isilon)
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: nfs-utils
Version: 6.8
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Assignee: Steve Dickson
QA Contact: Yongcheng Yang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-10-26 21:49 UTC by tin.ho
Modified: 2017-12-06 11:34 UTC (History)
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-12-06 11:34:39 UTC
Target Upstream Version:



Description tin.ho 2016-10-26 21:49:12 UTC
Description of problem:

The output of "mount" (and /etc/mtab) shows that NFS mounts against a clustered NFS server (Isilon OneFS 7.x) are mounted twice, and the problem persists across reboots.
It appears that the same directory is mounted from two different IP addresses, with the later mount shadowing the earlier one, even though the entry is listed only once in /etc/fstab.

Removing the "nodev" option from the fstab entry seems to help, as does disabling nscd from starting at boot.
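For reference, a hypothetical fstab entry of the kind affected (the server name, export path, and mount point below are made up, not taken from the affected hosts):

```
# /etc/fstab -- hypothetical example; "nodev" is the option whose removal
# appeared to avoid the duplicate mount
isilon.example.com:/ifs/data  /mnt/data  nfs  defaults,nodev  0 0
```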

The host is a VM on ESX, running the VMware Tools daemon, version 8.6.12.28992 (build-1480661).


Version-Release number of selected component (if applicable):
Red Hat Enterprise Linux Server release 6.8 (Santiago)
nfs-utils-1.2.3-70.el6_8.1.x86_64

How reproducible:
Fairly reproducible; two machines in the environment hit this consistently, while other VMs in the same environment do not.

Steps to Reproduce:
1. Reboot
2. cat /etc/mtab (or run "mount") and look for duplicated NFS entries
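The duplicate check in step 2 can be scripted. The following is a minimal sketch (the helper name `find_dup_mounts` is my own, not from the report): it reads a mount table in /proc/mounts format and prints each NFS mount point that appears more than once. On a healthy client it prints nothing.

```shell
# find_dup_mounts: read a mount table (device mountpoint fstype ...) on
# stdin and print each NFS mount point that occurs more than once.
find_dup_mounts() {
  awk '$3 ~ /^nfs/ { print $2 }' | sort | uniq -d
}

# On a live system:
#   find_dup_mounts < /proc/mounts
```

An empty result means no shadowed NFS mounts; on an affected host each duplicated mount point is printed once.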

Actual results:
Two mount entries per NFS entry specified in /etc/fstab, but only against the clustered Isilon NFS server, not against a single-machine NFS server such as NetApp.


Expected results:
One mount per NFS entry in /etc/fstab.


Additional info:

Comment 3 J. Adam Craig 2017-01-31 18:16:01 UTC
Confirmed with the same symptoms, also using an Isilon OneFS cluster and mounting NFS storage via its SmartConnect solution.

Wondering if this is a "bug," or a "feature" for failover? The symptom does not appear on NFS clients with a similar configuration running RHEL 7.x.

Both 'nscd' and the 'nodev' option are disabled on our RHEL 6.x clients; however, we can still confirm the issue.

Comment 4 J. Adam Craig 2017-01-31 19:50:16 UTC
I seem to have been able to isolate this behavior to the use of NetworkManager.  The following procedure eliminates the duplicate mounting for me:

1) Disable the 'NetworkManager' service:
   # chkconfig NetworkManager off

2) In each of the '/etc/sysconfig/network-scripts/ifcfg-*' interface configuration files, set 'NM_CONTROLLED=no'.

3) Reboot and confirm expected behavior.
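Steps 1 and 2 above can be sketched as a script. This is a minimal sketch, not a tested remediation: the helper name `set_nm_controlled_no` is mine, and it assumes the RHEL 6 network-scripts layout described in the comment. Review each ifcfg-* file before applying this on a real host.

```shell
# set_nm_controlled_no DIR: force NM_CONTROLLED=no in every ifcfg-* file
# under DIR (normally /etc/sysconfig/network-scripts on RHEL 6).
set_nm_controlled_no() {
  for f in "$1"/ifcfg-*; do
    [ -e "$f" ] || continue
    if grep -q '^NM_CONTROLLED=' "$f"; then
      # Replace an existing NM_CONTROLLED line in place.
      sed -i 's/^NM_CONTROLLED=.*/NM_CONTROLLED=no/' "$f"
    else
      # Append the setting if the file does not declare it at all.
      echo 'NM_CONTROLLED=no' >> "$f"
    fi
  done
}

# On a real host, as root (then reboot, per step 3):
#   chkconfig NetworkManager off
#   set_nm_controlled_no /etc/sysconfig/network-scripts
```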

Again, I would be curious to learn whether this behavior is truly a "bug," or an intentional measure to provide failover capability (i.e., mount the same clustered NFS export on the same mountpoint twice using different NFS server IPs, so that if the 'active' node gets killed, another mount is already sitting under it, ready to service the export).

Comment 5 tin.ho 2017-02-08 00:24:15 UTC
Thank you for the detective work!
Disabling NetworkManager from starting indeed made the double mounts go away upon reboot.  It is an acceptable solution for my server environment, where NetworkManager and a GUI aren't needed.

Though I would still argue the behavior is a bug and not a feature.  The network remained the same throughout the process (the host is a VM on ESX with a static IP, not roaming on WiFi).  I don't see the value of keeping an underlying mount in case some "higher" mount's network goes away.

Much thanks again.
Tin

Comment 7 Jan Kurik 2017-12-06 11:34:39 UTC
Red Hat Enterprise Linux 6 is in the Production 3 Phase. During the Production 3 Phase, Critical impact Security Advisories (RHSAs) and selected Urgent Priority Bug Fix Advisories (RHBAs) may be released as they become available.

The official life cycle policy can be reviewed here:

http://redhat.com/rhel/lifecycle

This issue does not meet the inclusion criteria for the Production 3 Phase and will be marked as CLOSED/WONTFIX. If this remains a critical requirement, please contact Red Hat Customer Support to request a re-evaluation of the issue, citing a clear business justification. Note that a strong business justification will be required for re-evaluation. Red Hat Customer Support can be contacted via the Red Hat Customer Portal at the following URL:

https://access.redhat.com/

