Bug 1416371
Summary: | NFS-Ganesha: Volume gets unexported on localhost if vol stop fails.
---|---|---|---
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Ambarish <asoman> |
Component: | common-ha | Assignee: | Jiffin <jthottan> |
Status: | CLOSED WONTFIX | QA Contact: | Manisha Saini <msaini> |
Severity: | low | Docs Contact: | |
Priority: | low | ||
Version: | rhgs-3.2 | CC: | amukherj, bmohanra, bturner, jthottan, rcyriac, rhinduja, rhs-bugs, skoduri, storage-qa-internal |
Target Milestone: | --- | Keywords: | FutureFeature, ZStream |
Target Release: | --- | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | Doc Type: | Known Issue | |
Doc Text: |
If the "gluster volume stop" operation on a volume exported via the NFS-Ganesha server fails, there is a probability that the volume will get unexported on a few nodes in spite of the command failure. This leads to an inconsistent state across the NFS-Ganesha cluster.
Workaround:
To restore the cluster to a normal state, perform the following:
* Identify the nodes where the volume got unexported.
* Re-export the volume manually on those nodes using the following dbus command:
# dbus-send --print-reply --system --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.AddExport string:/var/run/gluster/shared_storage/nfs-ganesha/exports/export.<volname>.conf string:"EXPORT(Path=/<volname>)"
|
Story Points: | --- |
Clone Of: | Environment: | ||
Last Closed: | 2019-05-13 11:34:28 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | 1416414 | ||
Bug Blocks: | 1351530 |
Description
Ambarish
2017-01-25 11:32:19 UTC
Hi Soumya, I have edited the doc text for the release notes, but I need a little more clarity with respect to the second sentence: "the volume gets unexported in the node where the command is executed, but node still have volume being exported".

Hi Bhavana,
I made a few updates to the doc text. Please check the same -
>>>
If the "gluster volume stop" operation on a volume exported via the NFS-Ganesha server fails, there is a probability that the volume will get unexported on a few nodes in spite of the command failure. This will lead to an inconsistent state across the NFS-Ganesha cluster.
Workaround:
To restore the cluster to a normal state, perform the following:
* Identify the nodes where the volume got unexported.
* Re-export the volume manually using the following dbus command:
# dbus-send --print-reply --system --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.AddExport string:/var/run/gluster/shared_storage/nfs-ganesha/exports/export.<volname>.conf string:"EXPORT(Path=/<volname>)"
<<<
Thanks Soumya. Slightly edited the doc text for the release notes.
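The first workaround step above (identifying the nodes where the volume got unexported) can be scripted with Ganesha's DBus admin interface: the `org.ganesha.nfsd.exportmgr.ShowExports` method on `/org/ganesha/nfsd/ExportMgr` lists the export paths the local nfs-ganesha daemon currently serves. A minimal sketch, to be run on each node in the cluster; the volume name `testvol` is a placeholder for the real volume name:

```shell
#!/bin/sh
VOLNAME="testvol"   # placeholder; substitute the actual volume name

# Lists all export paths known to the local nfs-ganesha daemon via its
# DBus admin interface. Errors (e.g. daemon not running) are suppressed
# so the caller just sees an empty export list in that case.
list_exports() {
    dbus-send --print-reply --system --dest=org.ganesha.nfsd \
        /org/ganesha/nfsd/ExportMgr \
        org.ganesha.nfsd.exportmgr.ShowExports 2>/dev/null
}

# Reports whether VOLNAME appears among the current exports on this node.
check_export() {
    if list_exports | grep -q "/${VOLNAME}"; then
        echo "volume ${VOLNAME} is exported on this node"
    else
        echo "volume ${VOLNAME} is NOT exported on this node; re-export it"
    fi
}

check_export
```

On any node where the script reports the volume missing, re-run the `AddExport` dbus command from the workaround to restore a consistent state across the cluster.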