Bug 1399186 - [GANESHA] Export ID changed during volume start and stop with message "lookup_export failed with Export id not found" in ganesha.log
Summary: [GANESHA] Export ID changed during volume start and stop with message "lookup_export failed with Export id not found" in ganesha.log
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: mainline
Hardware: Unspecified
OS: Unspecified
Importance: high unspecified
Target Milestone: ---
Assignee: Jiffin
QA Contact:
URL:
Whiteboard:
Depends On: 1398257
Blocks: 1401011 1401016
 
Reported: 2016-11-28 13:57 UTC by Jiffin
Modified: 2017-03-06 17:37 UTC
CC List: 9 users

Fixed In Version: glusterfs-3.10.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1398257
Clones: 1401011
Environment:
Last Closed: 2017-03-06 17:37:12 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Jiffin 2016-11-28 13:57:22 UTC
+++ This bug was initially created as a clone of Bug #1398257 +++

Description of problem:
The volume is still visible in the export list (# showmount -e localhost) even after it is in the stopped state.
The Export ID differs across nodes for the same exported volume while performing volume start and stop.


Version-Release number of selected component (if applicable):
glusterfs-3.8.4-5.el7rhgs.x86_64

# rpm -qa | grep ganesha
glusterfs-ganesha-3.8.4-5.el7rhgs.x86_64
nfs-ganesha-debuginfo-2.4.1-1.el7rhgs.x86_64
nfs-ganesha-2.4.1-1.el7rhgs.x86_64
nfs-ganesha-gluster-2.4.1-1.el7rhgs.x86_64


How reproducible:

Consistently


Steps to Reproduce:
1.Create and start the volume 

gluster volume create volganesha1 replica 2 10.70.47.3:/mnt/data1/b1 10.70.47.159:/mnt/data1/b1 10.70.46.241:/mnt/data1/b1 10.70.46.219:/mnt/data1/b1/ 10.70.47.3:/mnt/data2/b2 10.70.47.159:/mnt/data2/b2 10.70.46.241:/mnt/data2/b2 10.70.46.219:/mnt/data2/b2/ 10.70.47.3:/mnt/data3/b3 10.70.47.159:/mnt/data3/b3 10.70.46.241:/mnt/data3/b3 10.70.46.219:/mnt/data3/b3/

2.gluster vol set volganesha1 ganesha.enable on

3.showmount -e localhost 

# showmount -e localhost
Export list for localhost:
/volganesha1 (everyone)

4.Checked the export id on all four nodes. The Export ID was 2. (A script for comparing the id across the nodes is sketched after these steps.)
# dbus-send --type=method_call --print-reply --system --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.ShowExports

5.Stop the volume

6.Check showmount -e localhost
No volume is exported

7.Again start the same volume volganesha1

8.Check showmount -e localhost.
Volume is exported on all the 4 nodes with ID=2

9.Repeat steps 5 to 8 three times. Each time the volume is started, the Export ID is 2.

10.Stop the volume volganesha1

11.Delete the Volume volganesha1

12.Create and start a new Volume volganesha2

13.gluster vol set volganesha2 ganesha.enable on

14.Check showmount -e localhost. The volume is exported with ID=3 on all four nodes.

15.Stop the volume volganesha2. Check showmount -e localhost. The volume is no longer listed.

16.Again start the same volume volganesha2. Check showmount -e localhost. The volume is exported but with a DIFFERENT EXPORT ID.

Out of the 4 nodes, three have export ID 3 and one has export ID 4.

17.Stop the volume volganesha2.
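
The sketch referenced in step 4, for comparing the export id for the volume across the four nodes. This is only a rough illustration: it assumes passwordless root ssh to the node IPs used in step 1, and that the ShowExports reply prints the uint16 export id on the line just before the export path string; adjust the parsing to your nfs-ganesha version.

NODES="10.70.47.3 10.70.47.159 10.70.46.241 10.70.46.219"
VOL="volganesha1"
for node in $NODES; do
    reply=$(ssh root@"$node" dbus-send --type=method_call --print-reply --system \
        --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr \
        org.ganesha.nfsd.exportmgr.ShowExports)
    # Pick the uint16 value printed just before the "/<volname>" path string.
    id=$(echo "$reply" | grep -B1 "\"/$VOL\"" | awk '/uint16/ {print $2}')
    echo "$node : export id for /$VOL = ${id:-<not exported>}"
done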


Actual Result:

After stopping the volume, it is still visible in showmount -e localhost on three of the four nodes.

The Export ID is different across the nodes.

Observed the following messages in /var/log/ganesha.log

24/11/2016 15:36:44 : epoch 82a00000 : dhcp46-219.lab.eng.blr.redhat.com : ganesha.nfsd-18193[dbus_heartbeat] glusterfs_create_export :FSAL :EVENT :Volume volganesha2 exported at : '/'
24/11/2016 15:37:58 : epoch 82a00000 : dhcp46-219.lab.eng.blr.redhat.com : ganesha.nfsd-18193[dbus_heartbeat] dbus_message_entrypoint :DBUS :MAJ :Method (RemoveExport) on (org.ganesha.nfsd.exportmgr) failed: name = (org.freedesktop.DBus.Error.InvalidArgs), message = (lookup_export failed with Export id not found)
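
For context, the RemoveExport failure above corresponds to the unexport-side DBus call issued when the volume is stopped, roughly of the following form (a sketch; the export id value 4 here is only an example). If the id recorded on a node does not match an id nfs-ganesha actually holds, ganesha replies with "lookup_export failed with Export id not found", as in the log line above:

dbus-send --type=method_call --print-reply --system \
    --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr \
    org.ganesha.nfsd.exportmgr.RemoveExport uint16:4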


Expected Result:
After stopping, the volume should be unexported successfully.
The Export ID should be the same on all 4 nodes when the volume is exported.

--- Additional comment from Red Hat Bugzilla Rules Engine on 2016-11-24 05:56:35 EST ---

This bug is automatically being proposed for the current release of Red Hat Gluster Storage 3 under active development, by setting the release flag 'rhgs-3.2.0' to '?'.

If this bug should be proposed for a different release, please manually change the proposed release flag.

--- Additional comment from Manisha Saini on 2016-11-24 07:21:29 EST ---

sosreport present at -

http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1398257/

Comment 1 Worker Ant 2016-11-28 13:58:18 UTC
REVIEW: http://review.gluster.org/15948 (ganesha/scripts : avoid incrementing Export Id value for already exported volumes) posted (#1) for review on master by jiffin tony Thottan (jthottan)

Comment 2 Worker Ant 2016-11-29 09:04:42 UTC
REVIEW: http://review.gluster.org/15948 (ganesha/scripts : avoid incrementing Export Id value for already exported volumes) posted (#2) for review on master by jiffin tony Thottan (jthottan)

Comment 3 Worker Ant 2016-12-01 18:38:51 UTC
COMMIT: http://review.gluster.org/15948 committed in master by Kaleb KEITHLEY (kkeithle) 
------
commit 76eef16d762f500df500de0d3187aff23dc39ac6
Author: Jiffin Tony Thottan <jthottan>
Date:   Mon Nov 28 19:18:51 2016 +0530

    ganesha/scripts : avoid incrementing Export Id value for already exported volumes
    
    Currently a volume is unexported when it stops and re-exported during volume start
    using a hook script, and the Export Id value is incremented on each re-export. Since
    the hook script is called from every node in parallel, this may lead to inconsistent
    Export Id values.
    
    Change-Id: Ib9f19a3172b2ade29a3b4edc908b3267c68c0b20
    BUG: 1399186
    Signed-off-by: Jiffin Tony Thottan <jthottan>
    Reviewed-on: http://review.gluster.org/15948
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: soumya k <skoduri>
    Reviewed-by: Kaleb KEITHLEY <kkeithle>
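
The gist of the change, as a rough sketch (not the actual script; the config path and Export_Id layout below are assumptions for illustration): when a per-volume export config already exists, reuse the Export_Id recorded in it instead of allocating a new one, so that parallel hook runs on different nodes end up with the same id.

GANESHA_DIR="/etc/ganesha"
VOL="$1"
CONF="$GANESHA_DIR/exports/export.$VOL.conf"
if [ -f "$CONF" ] && grep -q "Export_Id" "$CONF"; then
    # Volume was exported before: keep the id already recorded on disk.
    export_id=$(grep -o 'Export_Id[[:space:]]*=[[:space:]]*[0-9]\+' "$CONF" | grep -o '[0-9]\+')
else
    # First export of this volume: take the next id after the highest one in use.
    last=$(grep -ho 'Export_Id[[:space:]]*=[[:space:]]*[0-9]\+' "$GANESHA_DIR"/exports/*.conf 2>/dev/null \
           | grep -o '[0-9]\+' | sort -n | tail -1)
    export_id=$(( ${last:-1} + 1 ))
fi
echo "Using Export_Id $export_id for volume $VOL"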

Comment 4 Shyamsundar 2017-03-06 17:37:12 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.10.0, please open a new bug report.

glusterfs-3.10.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-February/030119.html
[2] https://www.gluster.org/pipermail/gluster-users/

