Bug 1394138 - Shared storage mount point should be unmounted as part of peer detach [NEEDINFO]
Status: ASSIGNED
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd (Show other bugs)
Version: 3.2
Hardware: Unspecified   OS: Unspecified
Priority: low   Severity: low
Target Milestone: ---
Target Release: ---
Assigned To: Sunny Kumar
QA Contact: Bala Konda Reddy M
Whiteboard: shared-storage
Keywords: ZStream
Depends On:
Blocks: 1351530
Reported: 2016-11-11 02:40 EST by Manisha Saini
Modified: 2018-10-31 00:29 EDT
CC List: 12 users

See Also:
Fixed In Version:
Doc Type: Known Issue
Doc Text:
If a node is deleted from the NFS-Ganesha HA cluster without first unmounting the shared storage, and a peer detach of that node is then performed, the shared storage volume is still mounted and accessible at the /var/run/gluster/shared_storage/ location on that node even after it has been removed from the HA cluster. Workaround: After a peer is detached from the cluster, manually unmount the shared storage on that peer.
Story Points: ---
Clone Of:
Environment:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
amukherj: needinfo? (sunkumar)


Attachments: None
Description Manisha Saini 2016-11-11 02:40:26 EST
Document URL: 

Section Number and Name: 

Describe the issue: 

When a node is to be removed from the NFS-Ganesha HA cluster and from the gluster trusted storage pool, the shared storage should first be unmounted on that node.

If this unmount is not performed before the node is deleted from the HA cluster and then peer detached, the shared storage volume is still mounted and accessible at the /var/run/gluster/shared_storage/ location on that node even after it has been removed from the HA cluster:

/usr/libexec/ganesha/ganesha-ha.sh --delete /var/run/gluster/shared_storage/nfs-ganesha/ dhcp37-169.lab.eng.blr.redhat.com

gluster peer detach dhcp37-169.lab.eng.blr.redhat.com
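
For illustration, on the detached node the stale mount can be confirmed and cleaned up roughly as follows. The mount point matches the path above; the /etc/fstab step is an assumption, relevant only if an entry was added when shared storage was enabled.

# The shared storage mount is still present on the detached node:
mount | grep gluster_shared_storage

# Manual cleanup (the documented workaround):
umount /var/run/gluster/shared_storage
# Assumed cleanup: drop the gluster_shared_storage entry from /etc/fstab, if present
sed -i '/gluster_shared_storage/d' /etc/fstab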


  
Suggestions for improvement: 

Additional information:
Comment 5 Bhavana 2017-02-03 04:27:51 EST
Hi Anjana,

The document will be updated with the mentioned change for the 3.2 release.

Assigning it to myself.
Comment 6 Bhavana 2017-02-07 04:55:24 EST
Based on the discussion with engineering (Soumya, Avra, Surabhi, Manisha), it was decided to mark this as a bug to be fixed in an upcoming release. With respect to the documentation, this has to be changed to a "Known Issue" for 3.2.
Comment 7 Soumya Koduri 2017-02-07 05:09:46 EST
From the discussions with QE, it looks like when a new node is added to the gluster cluster, the shared storage volume gets auto-mounted, but in the case of peer detach it does not get unmounted. We plan to document this as a known_issue until it gets fixed.

Assuming this needs to be taken care of as part of the hook scripts of the gluster peer detach CLI, I am changing the component to glusterd. Kindly re-assign/correct the component if that is not the case.
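
For illustration only, the cleanup such a hook would need to perform could look roughly like the sketch below. The hook location is an assumption: glusterd does not currently ship a peer-detach hook, which is essentially what this bug asks for.

#!/bin/bash
# Illustrative sketch of a post peer-detach cleanup (assumed hook point,
# e.g. something like /var/lib/glusterd/hooks/1/detach/post/ if it existed).
MOUNT_POINT="/var/run/gluster/shared_storage"

# Unmount the shared storage only if it is actually mounted here.
if mountpoint -q "$MOUNT_POINT"; then
    umount "$MOUNT_POINT" || umount -l "$MOUNT_POINT"
fi

# Drop the fstab entry added when cluster.enable-shared-storage was turned on
# (assumed entry format; a no-op if no such entry exists).
sed -i '/gluster_shared_storage/d' /etc/fstab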
Comment 11 Avra Sengupta 2017-03-14 01:38:58 EDT
As far as shared storage is concerned, the doc text looks good to me. I would like Soumya to have a look at the NFS-Ganesha bits of it.
Comment 12 Soumya Koduri 2017-03-14 02:10:06 EDT
The doc text looks good, but since this issue is not specific to NFS-Ganesha alone, I have made a few corrections. Please check the same:

If a node is detached from the Gluster storage pool while cluster.enable-shared-storage is enabled, the shared volume will still be accessible at the /var/run/gluster/shared_storage/ location even after the node is removed.

Workaround: Before a peer is detached from the cluster, manually unmount the shared storage on that peer.
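
In command form, the workaround on the peer being detached amounts to something like:

umount /var/run/gluster/shared_storage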
Comment 13 Avra Sengupta 2017-03-14 02:49:17 EDT
This looks much better and simpler.
Comment 17 Atin Mukherjee 2018-10-06 11:56:35 EDT
Sunny - What's the plan on this BZ? Are we planning to address this in any of the upcoming releases?
Comment 18 Atin Mukherjee 2018-10-31 00:29:09 EDT
Sunny - Did you get a chance to estimate the size of the fix?
