Bug 1398257

Summary: [GANESHA] Export ID changed during volume start and stop with message "lookup_export failed with Export id not found" in ganesha.log
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Manisha Saini <msaini>
Component: nfs-ganesha
Assignee: Jiffin <jthottan>
Status: CLOSED ERRATA
QA Contact: Manisha Saini <msaini>
Severity: unspecified
Docs Contact:
Priority: high
Version: rhgs-3.2
CC: amukherj, jthottan, kkeithle, ndevos, rcyriac, rhs-bugs, skoduri, storage-qa-internal
Target Milestone: ---
Target Release: RHGS 3.2.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: glusterfs-3.8.4-7
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
: 1399186 (view as bug list)
Environment:
Last Closed: 2017-03-23 05:50:36 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1401806
Bug Blocks: 1351528, 1399186, 1401011, 1401016

Description Manisha Saini 2016-11-24 10:56:31 UTC
Description of problem:
The volume is still visible in the export list (showmount -e localhost) even after it has been stopped.
The export ID differs across nodes for the same exported volume after a volume stop/start cycle.
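
Both symptoms can be confirmed with the two checks used throughout the steps below: showmount for the client-visible export list, and the nfs-ganesha export manager over D-Bus for the per-node export IDs. A minimal sketch, to be run on each node:

showmount -e localhost    # stopped volumes should not be listed here
dbus-send --type=method_call --print-reply --system \
    --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr \
    org.ganesha.nfsd.exportmgr.ShowExports    # lists each export together with its ID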


Version-Release number of selected component (if applicable):
glusterfs-3.8.4-5.el7rhgs.x86_64

# rpm -qa | grep ganesha
glusterfs-ganesha-3.8.4-5.el7rhgs.x86_64
nfs-ganesha-debuginfo-2.4.1-1.el7rhgs.x86_64
nfs-ganesha-2.4.1-1.el7rhgs.x86_64
nfs-ganesha-gluster-2.4.1-1.el7rhgs.x86_64


How reproducible:

Consistently


Steps to Reproduce:
1. Create and start the volume:

gluster volume create volganesha1 replica 2 10.70.47.3:/mnt/data1/b1 10.70.47.159:/mnt/data1/b1 10.70.46.241:/mnt/data1/b1 10.70.46.219:/mnt/data1/b1/ 10.70.47.3:/mnt/data2/b2 10.70.47.159:/mnt/data2/b2 10.70.46.241:/mnt/data2/b2 10.70.46.219:/mnt/data2/b2/ 10.70.47.3:/mnt/data3/b3 10.70.47.159:/mnt/data3/b3 10.70.46.241:/mnt/data3/b3 10.70.46.219:/mnt/data3/b3/
gluster volume start volganesha1

2. gluster vol set volganesha1 ganesha.enable on

3. showmount -e localhost

# showmount -e localhost
Export list for localhost:
/volganesha1 (everyone)

4. Check the export ID on all four nodes. The export ID was 2 on each node (a grep to pull just the IDs from the reply is sketched after the command):
# dbus-send --type=method_call --print-reply --system --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.ShowExports
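
(Illustrative only; the field layout of the ShowExports reply can vary between nfs-ganesha versions, so adjust the grep as needed.)

dbus-send --type=method_call --print-reply --system \
    --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr \
    org.ganesha.nfsd.exportmgr.ShowExports | grep int32    # export IDs appear among the int32 fields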

5. Stop the volume.

6. Check showmount -e localhost.
No volume is exported.

7. Start the same volume volganesha1 again.

8. Check showmount -e localhost.
The volume is exported on all four nodes with export ID 2.

9. Repeat steps 5 to 8 three times. Each time the volume is started, the export ID is 2 (a scripted version of this loop is sketched below).
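
A minimal sketch of the loop, assuming it is run on one of the cluster nodes (the 5-second sleep is an arbitrary settle time, not a tested value):

for i in 1 2 3; do
    gluster --mode=script volume stop volganesha1    # --mode=script suppresses the y/n prompt
    showmount -e localhost                           # expect: volume absent
    gluster volume start volganesha1
    sleep 5                                          # give ganesha a moment to re-export
    showmount -e localhost                           # expect: /volganesha1 (everyone)
done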

10. Stop the volume volganesha1.

11. Delete the volume volganesha1.

12. Create and start a new volume, volganesha2.

13. gluster vol set volganesha2 ganesha.enable on

14. Check showmount -e localhost. The volume is exported with export ID 3 on all four nodes.

15. Stop the volume volganesha2 and check showmount -e localhost. The volume is no longer listed.

16. Start the same volume volganesha2 again and check showmount -e localhost. The volume is exported, but with a DIFFERENT EXPORT ID.

Out of the four nodes, three have export ID 3 and one has export ID 4.

17. Stop the volume volganesha2.


Actual Result:

After stopping the volume, it is still visible in showmount -e localhost on three of the four nodes.

The export ID differs across the nodes (a cross-node check is sketched below).
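
A cross-node comparison makes the mismatch visible. A minimal sketch, assuming root ssh access to the four node IPs from step 1 (the grep for IDs is illustrative, as in step 4):

for h in 10.70.47.3 10.70.47.159 10.70.46.241 10.70.46.219; do
    echo "== $h =="
    ssh root@"$h" showmount -e localhost
    ssh root@"$h" "dbus-send --type=method_call --print-reply --system \
        --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr \
        org.ganesha.nfsd.exportmgr.ShowExports" | grep int32
done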

Observed the following messages in /var/log/ganesha.log:

24/11/2016 15:36:44 : epoch 82a00000 : dhcp46-219.lab.eng.blr.redhat.com : ganesha.nfsd-18193[dbus_heartbeat] glusterfs_create_export :FSAL :EVENT :Volume volganesha2 exported at : '/'
24/11/2016 15:37:58 : epoch 82a00000 : dhcp46-219.lab.eng.blr.redhat.com : ganesha.nfsd-18193[dbus_heartbeat] dbus_message_entrypoint :DBUS :MAJ :Method (RemoveExport) on (org.ganesha.nfsd.exportmgr) failed: name = (org.freedesktop.DBus.Error.InvalidArgs), message = (lookup_export failed with Export id not found)
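
For reference, the failing RemoveExport method is the D-Bus call the unexport path issues; a manual invocation would look like the following (the export ID 2 is illustrative). The "Export id not found" error means ganesha no longer has an export registered under the ID being removed:

dbus-send --print-reply --system --dest=org.ganesha.nfsd \
    /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.RemoveExport \
    uint16:2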


Expected Result:
After stopping, the volume should be unexported successfully on all nodes.
The export ID should be the same on all four nodes whenever the volume is exported.

Comment 3 Jiffin 2016-11-28 14:00:01 UTC
The patch has been posted upstream for review: http://review.gluster.org/#/c/15948/

Comment 4 surabhi 2016-11-29 08:55:53 UTC
As discussed in the bug triage meeting, providing qa_ack.

Comment 7 Jiffin 2016-12-02 14:40:41 UTC
The patch has been merged in upstream master.

Downstream patch link:
https://code.engineering.redhat.com/gerrit/#/c/92001/1

Comment 9 Manisha Saini 2016-12-06 09:24:00 UTC

Verification of this bug requires volume start and stop to work correctly. Bug 1401806 currently tracks that issue, so this bug is marked as dependent on it for complete verification.

Comment 10 Manisha Saini 2016-12-12 11:41:29 UTC
Verified this bug on:

[root@dhcp47-3 tmp]# rpm -qa | grep ganesha
nfs-ganesha-2.4.1-2.el7rhgs.x86_64
glusterfs-ganesha-3.8.4-8.el7rhgs.x86_64
nfs-ganesha-gluster-2.4.1-2.el7rhgs.x86_64

As the issue is no longer observed, marking this bug as Verified.

Comment 12 errata-xmlrpc 2017-03-23 05:50:36 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html