Bug 1416371 - NFS-Ganesha: Volume gets unexported on localhost if vol stop fails.
Summary: NFS-Ganesha: Volume gets unexported on localhost if vol stop fails.
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: common-ha
Version: rhgs-3.2
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ---
Assignee: Jiffin
QA Contact: Manisha Saini
URL:
Whiteboard:
Depends On: 1416414
Blocks: 1351530
 
Reported: 2017-01-25 11:32 UTC by Ambarish
Modified: 2019-05-13 11:34 UTC
CC List: 9 users

Fixed In Version:
Doc Type: Known Issue
Doc Text:
If "gluster volume stop" operation on a volume exported via NFS-ganesha server fails, there is a probability that the volume will get unexported on few nodes, inspite of the command failure. This will lead to inconsistent state across the NFS-ganesha cluster. Workaround: To restore the cluster back to normal state, perform the following - * Identify the nodes where the volume got unexported * Re-export the volume manually using the following dbus command: # dbus-send --print-reply --system --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.AddExport string:/var/run/gluster/shared_storage/nfs-ganesha/exports/export.<volname>.conf string:"EXPORT(Path=/<volname>)"
Clone Of:
Environment:
Last Closed: 2019-05-13 11:34:28 UTC
Embargoed:



Description Ambarish 2017-01-25 11:32:19 UTC
Description of problem:
-----------------------

Volume stop fails when another gluster operation, say rebalance, is running (which is expected).

But the volume stop unexports the volume on localhost before the command fails on the CLI.

It'd be good to do the "unexport" part at the end of the Staging process.

Version-Release number of selected component (if applicable):
-------------------------------------------------------------

nfs-ganesha - 2.4.1-6

How reproducible:
-----------------

100%

Steps to Reproduce:
-------------------

1. Trigger rebalance
2. Stop the volume exported via Ganesha
3. Check showmount on localhost
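
For reference, the steps above map to the following commands (a minimal sketch; "testvol" is a placeholder volume name, not taken from this report):

# gluster volume rebalance testvol start
# gluster volume stop testvol          (expected to fail while the rebalance is in progress)
# showmount -e localhost               (the volume should still appear in the export list)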

Actual results:
---------------

The volume gets unexported on localhost.

Expected results:
-----------------

The volume should not get unexported when the stop operation fails.

Additional info:
----------------

More details in https://bugzilla.redhat.com/show_bug.cgi?id=1415630. The issue was uncovered during that scenario.

Comment 4 Bhavana 2017-03-13 16:13:05 UTC
Hi Soumya,

I have edited the doc text for the release notes, but I need a little more clarity with respect to the second sentence: "the volume gets unexported in the node where the command is executed, but node still have volume being exported"

Comment 5 Soumya Koduri 2017-03-14 04:55:36 UTC
Hi Bhavana,

I made a few updates to the doc text. Please check the updated text below -

>>>
If "gluster volume stop" operation on a volume exported via NFS-ganesha server fails, there is a probability that the volume shall get unexported on few nodes in spite of the command failure. This shall lead to inconsistent state across the NFS-ganesha cluster.

Workaround: 
To restore the cluster back to normal state, perform the following -
* Identify the nodes where the volume got unexported
* Re-export the volume manually using the following dbus command -

# dbus-send --print-reply --system --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.AddExport string:/var/run/gluster/shared_storage/nfs-ganesha/exports/export.<volname>.conf string:"EXPORT(Path=/<volname>)"
<<<
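
For the "identify the nodes" step, a quick check that can be run on each node of the NFS-Ganesha cluster (a sketch only; <volname> is a placeholder for the actual volume name):

# showmount -e localhost | grep "<volname>"

If the volume is missing from the output on a node, re-run the dbus-send AddExport command above on that node to restore the export.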

Comment 6 Bhavana 2017-03-14 10:08:18 UTC
Thanks Soumya.

Slightly edited the doc text for the release notes.

