+++ This bug was initially created as a clone of Bug #1406025 +++

Description of problem:
When a node is deleted from an existing ganesha cluster, the volume entry of the exported volume gets deleted from the /etc/ganesha/ganesha.conf file. As a result, the remaining nodes reflect a stale entry for the volume in showmount -e localhost.

Version-Release number of selected component (if applicable):
# rpm -qa | grep ganesha
nfs-ganesha-2.4.1-3.el7rhgs.x86_64
nfs-ganesha-gluster-2.4.1-3.el7rhgs.x86_64
glusterfs-ganesha-3.8.4-9.el7rhgs.x86_64

How reproducible:
Consistently

Steps to Reproduce:
1. Create a 6-node ganesha cluster on an 8-node gluster setup. Enable ganesha on it.
2. Delete 1 node from the existing ganesha cluster:
   /usr/libexec/ganesha/ganesha-ha.sh --delete /var/run/gluster/shared_storage/nfs-ganesha/ dhcp46-232.lab.eng.blr.redhat.com
3. Check the /etc/ganesha/ganesha.conf file.

Actual results:
The volume which was previously exported gets deleted from the /etc/ganesha/ganesha.conf file, and showmount -e localhost reflects a stale entry of the previously exported volume. Moreover, if the node is deleted and then added again, the volume that was exported before the node delete is not exported on the re-added node.

Expected results:
Node deletion should not modify the /etc/ganesha/ganesha.conf file, and the volume should be exported on the newly added node.

Additional info:

--- Additional comment from Soumya Koduri on 2016-12-19 11:54:27 EST ---

create_ganesha_conf_file()
{
    if [ $1 == "yes" ];
    then
        if [ -e $GANESHA_CONF ];
        then
            rm -rf $GANESHA_CONF
        fi
        # The symlink /etc/ganesha/ganesha.conf needs to be
        # created using the ganesha conf file kept in the
        # shared storage. Every node will only have this
        # link, and the actual file will be stored in shared
        # storage, so that editing the ganesha conf is
        # easy as well as more consistent.
        ln -s $HA_CONFDIR/ganesha.conf $GANESHA_CONF
    else
        # Restoring previous file
        rm -rf $GANESHA_CONF
        sed -r -i -e '/^%include[[:space:]]+".+\.conf"$/d' $HA_CONFDIR/ganesha.conf
    fi
}

^^^^ This line (which could have removed the $VOL.conf entries in the ganesha.conf file) is not necessary, IMO. Jiffin would be the best person to comment.

--- Additional comment from Jiffin on 2016-12-19 13:16:25 EST ---

(In reply to Soumya Koduri from comment #2)
> sed -r -i -e '/^%include[[:space:]]+".+\.conf"$/d' $HA_CONFDIR/ganesha.conf
>
> ^^^^ This line (which could have removed the $VOL.conf entries in the
> ganesha.conf file) is not necessary, IMO. Jiffin would be the best person
> to comment.

Yes, the above is absolutely correct. Based on that, I gave a workaround on Manisha's setup, but it didn't work out as expected. I will check again tomorrow.
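The sed line flagged above deletes every %include line from the shared ganesha.conf whenever the symlink is torn down, which is exactly how the export entries get lost on a node delete. As a minimal sketch of a safer else-branch, assuming the intent is only to turn the symlink back into a plain file while preserving the exported-volume entries (a hypothetical helper, not the actual review.gluster.org/16209 patch):

```shell
#!/bin/sh
# Hypothetical sketch: restore /etc/ganesha/ganesha.conf as a plain
# file without stripping the %include "<vol>.conf" lines that export
# volumes. Paths are the defaults used elsewhere in this report.
HA_CONFDIR=${HA_CONFDIR:-/var/run/gluster/shared_storage/nfs-ganesha}
GANESHA_CONF=${GANESHA_CONF:-/etc/ganesha/ganesha.conf}

restore_ganesha_conf_file()
{
    # Drop the symlink only; copy the shared config back verbatim so
    # the exported-volume entries survive on this node.
    rm -f "$GANESHA_CONF"
    cp "$HA_CONFDIR/ganesha.conf" "$GANESHA_CONF"
}
```

The key difference from the snippet above is that no sed pass ever touches the %include lines, so exports added by any node of the cluster survive a node delete.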
REVIEW: http://review.gluster.org/16209 (ganesha/scripts : Prevent removal of entries in ganesha.conf during deletion of a node) posted (#1) for review on master by jiffin tony Thottan (jthottan)
COMMIT: http://review.gluster.org/16209 committed in master by Kaleb KEITHLEY (kkeithle)
------
commit 8b42e1b5688f8600086ecc0e33ac4abf5e7c2772
Author: Jiffin Tony Thottan <jthottan>
Date: Tue Dec 20 10:42:31 2016 +0530

    ganesha/scripts : Prevent removal of entries in ganesha.conf during
    deletion of a node

    Change-Id: Ia6c653eeb9bef7ff4107757f845218c2316db2e4
    BUG: 1406249
    Signed-off-by: Jiffin Tony Thottan <jthottan>
    Reviewed-on: http://review.gluster.org/16209
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: soumya k <skoduri>
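To verify the fixed behaviour after a node delete, one can compare the volumes still referenced by %include lines in the shared ganesha.conf before and after the operation; the list should be unchanged. A small hypothetical helper (not part of ganesha-ha.sh), assuming the usual export.<vol>.conf include naming:

```shell
#!/bin/sh
# Hypothetical verification helper: print the volume names referenced
# by %include "<dir>/export.<vol>.conf" lines in a ganesha.conf.
list_exported_volumes()
{
    # $1: path to a ganesha.conf file
    sed -n 's/^%include ".*export\.\(.*\)\.conf"$/\1/p' "$1"
}
```

Running it against the shared config before and after `ganesha-ha.sh --delete` should yield identical output on a build containing the fix.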
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.10.0, please open a new bug report.

glusterfs-3.10.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-February/030119.html
[2] https://www.gluster.org/pipermail/gluster-users/