Bug 1394138

Summary: Shared storage mount point should be unmounted as part of peer detach
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Manisha Saini <msaini>
Component: glusterd
Assignee: Sheetal Pamecha <spamecha>
Status: CLOSED WONTFIX
QA Contact: Bala Konda Reddy M <bmekala>
Severity: low
Docs Contact:
Priority: low
Version: rhgs-3.2
CC: amukherj, asriram, pasik, rcyriac, rhs-bugs, sheggodu, skoduri, spamecha, storage-doc, storage-qa-internal, sunkumar, vbellur
Target Milestone: ---
Keywords: ZStream
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard: shared-storage
Fixed In Version:
Doc Type: Known Issue
Doc Text:
If a node is deleted from the NFS-Ganesha HA cluster without first unmounting the shared storage, and a peer detach of that node is then performed, the shared storage volume remains mounted and accessible at /var/run/gluster/shared_storage/ on that node even after it has been removed from the HA cluster. Workaround: After a peer is detached from the cluster, manually unmount the shared storage on that peer.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2019-10-14 05:21:27 UTC
Type: Bug
Bug Blocks: 1351530    

Description Manisha Saini 2016-11-11 07:40:26 UTC
Document URL: 

Section Number and Name: 

Describe the issue: 

When a node is to be deleted from the NFS-Ganesha HA cluster and from the gluster trusted storage pool, the shared storage must first be unmounted on that node.

If the unmount is not performed, and the node is deleted from the NFS-Ganesha HA cluster and then peer detached, the shared storage volume is still mounted and accessible at /var/run/gluster/shared_storage/ on that node even after it has been removed from the HA cluster:

/usr/libexec/ganesha/ganesha-ha.sh --delete /var/run/gluster/shared_storage/nfs-ganesha/ dhcp37-169.lab.eng.blr.redhat.com

gluster peer detach dhcp37-169.lab.eng.blr.redhat.com
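
On the detached node the shared storage fuse mount is still present; a quick way to confirm this (illustrative commands, assuming the default mount point):

grep shared_storage /proc/mounts
df -h /var/run/gluster/shared_storage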


  
Suggestions for improvement: 

Additional information:

Comment 5 Bhavana 2017-02-03 09:27:51 UTC
Hi Anjana,

The document will be updated with the mentioned change for the 3.2 release.

Assigning it to myself.

Comment 6 Bhavana 2017-02-07 09:55:24 UTC
Based on the discussion with engineering (Soumya, Avra, Surabhi, Manisha), it was decided to target this as a bug fix for an upcoming release. With respect to the documentation, this has to be changed to a "Known Issue" for 3.2.

Comment 7 Soumya Koduri 2017-02-07 10:09:46 UTC
From the discussions with QE, it looks like the shared storage volume gets auto-mounted when a new node is added to the gluster cluster, but it does not get unmounted on peer detach. We plan to document this as a known issue until it gets fixed.

Assuming this needs to be taken care of as part of the hook scripts of the gluster peer detach CLI, I am changing the component to glusterd. Kindly re-assign/correct the component if that is not the case.
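
A rough sketch of what such a hook could do (hypothetical script; whether and how glusterd would invoke it on peer detach is exactly the open question here), since the cleanup itself is only a conditional unmount:

#!/bin/bash
# Hypothetical peer-detach hook: lazily unmount the shared storage
# mount point if it is still mounted on the node being detached.
MNT=/var/run/gluster/shared_storage
if mountpoint -q "$MNT"; then
    umount -l "$MNT"
fi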

Comment 11 Avra Sengupta 2017-03-14 05:38:58 UTC
As far as shared storage is concerned, the doc text looks good to me. I would like Soumya to have a look at the NFS-Ganesha bits of it.

Comment 12 Soumya Koduri 2017-03-14 06:10:06 UTC
The doc text looks good, but since this issue is not specific just to NFS-Ganesha, I have made a few corrections. Please check them:

If a node is detached from the gluster storage pool while shared storage (cluster.enable-shared-storage) is enabled, the shared volume is still accessible at the /var/run/gluster/shared_storage/ location on that node even after the node is removed.

Workaround: Before a peer is detached from the cluster, manually unmount the shared storage on that peer.
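
For example, on the peer that is about to be detached (assuming the default mount point created by cluster.enable-shared-storage):

umount /var/run/gluster/shared_storage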

Comment 13 Avra Sengupta 2017-03-14 06:49:17 UTC
This looks much better and simpler.

Comment 17 Atin Mukherjee 2018-10-06 15:56:35 UTC
Sunny - What's the plan for this BZ? Are we planning to address this in any of the upcoming releases?

Comment 18 Atin Mukherjee 2018-10-31 04:29:09 UTC
Sunny - Did you get a chance to estimate the size of the fix?

Comment 19 Sunny Kumar 2018-11-27 09:40:47 UTC
Atin,

This is more of an improvement and will require 10-15 days of work. I am planning to fix it in a future release (after 2-3 months).

Comment 21 Atin Mukherjee 2019-02-11 03:40:55 UTC
Where are we with respect to the upstream fix?