+++ This bug was initially created as a clone of Bug #1401011 +++
+++ This bug was initially created as a clone of Bug #1399186 +++
+++ This bug was initially created as a clone of Bug #1398257 +++

Description of problem:

The volume is still visible in the export list (showmount -e localhost) even after the volume has been stopped. In addition, the export ID for the same exported volume differs across nodes after volume start/stop cycles.

Version-Release number of selected component (if applicable):
glusterfs-3.8.4-5.el7rhgs.x86_64

# rpm -qa | grep ganesha
glusterfs-ganesha-3.8.4-5.el7rhgs.x86_64
nfs-ganesha-debuginfo-2.4.1-1.el7rhgs.x86_64
nfs-ganesha-2.4.1-1.el7rhgs.x86_64
nfs-ganesha-gluster-2.4.1-1.el7rhgs.x86_64

How reproducible:
Consistently

Steps to Reproduce:
1. Create and start the volume:
   gluster volume create volganesha1 replica 2 10.70.47.3:/mnt/data1/b1 10.70.47.159:/mnt/data1/b1 10.70.46.241:/mnt/data1/b1 10.70.46.219:/mnt/data1/b1/ 10.70.47.3:/mnt/data2/b2 10.70.47.159:/mnt/data2/b2 10.70.46.241:/mnt/data2/b2 10.70.46.219:/mnt/data2/b2/ 10.70.47.3:/mnt/data3/b3 10.70.47.159:/mnt/data3/b3 10.70.46.241:/mnt/data3/b3 10.70.46.219:/mnt/data3/b3/
2. gluster vol set volganesha1 ganesha.enable on
3. Check the export list:
   # showmount -e localhost
   Export list for localhost:
   /volganesha1 (everyone)
4. Check the export ID on all four nodes; it was 2 everywhere:
   # dbus-send --type=method_call --print-reply --system --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.ShowExports
5. Stop the volume.
6. Check showmount -e localhost: no volume is exported.
7. Start the same volume volganesha1 again.
8. Check showmount -e localhost: the volume is exported on all 4 nodes with ID=2.
9. Repeat steps 5 to 8 three times. Each time the volume is started, the export ID is 2.
10. Stop the volume volganesha1.
11. Delete the volume volganesha1.
12. Create and start a new volume, volganesha2.
13. gluster vol set volganesha2 ganesha.enable on
14. Check showmount -e localhost: the volume is exported with ID=3 on all four nodes.
15. Stop the volume volganesha2. Check showmount -e localhost: the volume is no longer listed.
16. Start the same volume volganesha2 again. Check showmount -e localhost: the volume is exported but with a DIFFERENT EXPORT ID. Out of the 4 nodes, three have export ID 3 and one has export ID 4.
17. Stop the volume volganesha2.

Actual results:
After stopping the volume, it is still visible in showmount -e localhost on three of the four nodes, and the export ID differs between nodes.

The following messages were observed in /var/log/ganesha.log:

24/11/2016 15:36:44 : epoch 82a00000 : dhcp46-219.lab.eng.blr.redhat.com : ganesha.nfsd-18193[dbus_heartbeat] glusterfs_create_export :FSAL :EVENT :Volume volganesha2 exported at : '/'
24/11/2016 15:37:58 : epoch 82a00000 : dhcp46-219.lab.eng.blr.redhat.com : ganesha.nfsd-18193[dbus_heartbeat] dbus_message_entrypoint :DBUS :MAJ :Method (RemoveExport) on (org.ganesha.nfsd.exportmgr) failed: name = (org.freedesktop.DBus.Error.InvalidArgs), message = (lookup_export failed with Export id not found)

Expected results:
After stopping the volume, it should be unexported successfully, and the export ID should be the same on all 4 nodes while the volume is exported.

--- Additional comment from Red Hat Bugzilla Rules Engine on 2016-11-24 05:56:35 EST ---

This bug is automatically being proposed for the current release of Red Hat Gluster Storage 3 under active development, by setting the release flag 'rhgs-3.2.0' to '?'.

If this bug should be proposed for a different release, please manually change the proposed release flag.
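The two checks repeated throughout the steps above (is the volume still in the export list, and do all nodes agree on the export ID) can be sketched as small shell helpers. The showmount output and the per-node ID lists below are canned samples standing in for what a real cluster would return; only the checking logic is illustrated.

```shell
#!/bin/sh
# is_exported: true if the given volume appears in `showmount -e` style output.
is_exported() {
    printf '%s\n' "$1" | grep -q "^/$2 "
}

# check_ids: "OK" if every node reports the same export ID, else "MISMATCH".
check_ids() {
    first=$1
    for id in "$@"; do
        [ "$id" = "$first" ] || { echo MISMATCH; return 1; }
    done
    echo OK
}

# Canned sample of `showmount -e localhost` output (not captured live).
sample='Export list for localhost:
/volganesha1 (everyone)'

is_exported "$sample" volganesha1 && echo "volganesha1 exported"
is_exported "$sample" volganesha2 || echo "volganesha2 not exported"

check_ids 2 2 2 2   # the healthy state seen in step 8
check_ids 3 3 3 4   # the broken state seen in step 16
```

On a real cluster you would feed `is_exported` the live output of `showmount -e` from each node, and `check_ids` the IDs reported by the ShowExports DBus call quoted in step 4.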
--- Additional comment from Manisha Saini on 2016-11-24 07:21:29 EST ---

sosreport is present at http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1398257/

--- Additional comment from Worker Ant on 2016-11-28 08:58:18 EST ---

REVIEW: http://review.gluster.org/15948 (ganesha/scripts : avoid incrementing Export Id value for already exported volumes) posted (#1) for review on master by jiffin tony Thottan (jthottan)

--- Additional comment from Worker Ant on 2016-11-29 04:04:42 EST ---

REVIEW: http://review.gluster.org/15948 (ganesha/scripts : avoid incrementing Export Id value for already exported volumes) posted (#2) for review on master by jiffin tony Thottan (jthottan)

--- Additional comment from Worker Ant on 2016-12-01 13:38:51 EST ---

COMMIT: http://review.gluster.org/15948 committed in master by Kaleb KEITHLEY (kkeithle)
------
commit 76eef16d762f500df500de0d3187aff23dc39ac6
Author: Jiffin Tony Thottan <jthottan>
Date:   Mon Nov 28 19:18:51 2016 +0530

    ganesha/scripts : avoid incrementing Export Id value for already exported volumes

    Currently a volume is unexported when it stops and re-exported during
    volume start via a hook script, and the export ID is incremented on
    each re-export. Since the hook script runs on every node in parallel,
    this can leave the export ID inconsistent across nodes.

    Change-Id: Ib9f19a3172b2ade29a3b4edc908b3267c68c0b20
    BUG: 1399186
    Signed-off-by: Jiffin Tony Thottan <jthottan>
    Reviewed-on: http://review.gluster.org/15948
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: soumya k <skoduri>
    Reviewed-by: Kaleb KEITHLEY <kkeithle>

--- Additional comment from Worker Ant on 2016-12-02 09:55:12 EST ---

REVIEW: http://review.gluster.org/16013 (ganesha/scripts : avoid incrementing Export Id value for already exported volumes) posted (#1) for review on release-3.9 by jiffin tony Thottan (jthottan)
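The idea behind the committed fix can be illustrated with a small sketch. The export-file layout and helper below are hypothetical (the real logic lives in the ganesha hook scripts on shared storage); they only demonstrate the rule "reuse the Export_Id already recorded for a volume instead of incrementing a counter on every volume start".

```shell
#!/bin/sh
# Hypothetical export registry, one line per previously exported volume.
# The format mimics a ganesha export block header but is purely illustrative.
conf=$(mktemp)
cat > "$conf" <<'EOF'
Export_Id = 2 ; # volganesha1
Export_Id = 3 ; # volganesha2
EOF

# next_export_id VOLNAME: reuse the recorded ID if this volume was exported
# before; otherwise allocate max existing ID + 1.
next_export_id() {
    vol=$1
    id=$(awk -v v="$vol" '$0 ~ v {print $3; exit}' "$conf")
    if [ -z "$id" ]; then
        id=$(awk '/Export_Id/ {if ($3+0 > m) m=$3+0} END {print m+1}' "$conf")
    fi
    echo "$id"
}

next_export_id volganesha2   # already exported: reuses 3
next_export_id volganesha3   # new volume: gets 4
# (cleanup of $conf omitted for brevity)
```

Because every node derives the ID from the same shared record rather than from its own restart count, parallel hook-script runs can no longer drift apart the way steps 15-16 of the reproducer show.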
Jiffin - can the fix be backported to the release-3.8 branch?
In 3.8 the export ID is incremented on each volume restart (this has been the behavior since day one). With the shared_storage integration we are no longer required to change the export ID for a volume. We might hit a similar kind of issue in 3.8, but the fix is not a straightforward backport from master.