Bug 1399122 - Ganesha: Need to add cleanup steps for ganesha cluster if it goes to an inconsistent state while doing the setup.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: doc-Administration_Guide
Version: rhgs-3.2
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHGS 3.2.0
Assignee: Bhavana
QA Contact: surabhi
URL:
Whiteboard:
Depends On:
Blocks: 1351553
 
Reported: 2016-11-28 10:18 UTC by surabhi
Modified: 2017-03-24 10:23 UTC
CC List: 12 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-03-24 10:23:32 UTC
Embargoed:



Description surabhi 2016-11-28 10:18:25 UTC
Document URL: 

https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/sect-NFS.html

Section Number and Name: 
Either a new section needs to be created under the NFS heading of the document for cleaning up the ganesha cluster, or a Kbase article could be created and linked from the NFS section of the Admin Guide.


Describe the issue: 
If for some reason the ganesha cluster setup does not succeed, whether done manually or using gdeploy, and the cluster is left in an inconsistent state, there are no instructions for cleaning up the cluster from that state and bringing it back to a normal state.


Suggestions for improvement: 

Steps need to be added for cleaning up a ganesha cluster that is in an inconsistent state.

Additional information: 

This applies only when the cluster goes into an inconsistent state while the setup is being done.

Comment 3 Bhavana 2017-02-02 06:21:43 UTC
Hi Surabhi,

Just checking whether this bug is the same as, or similar to, the following bug:

https://bugzilla.redhat.com/show_bug.cgi?id=1399148

If it is not, then please share the steps/kbase article so that I can add them to the Admin Guide.

Thanks

Comment 4 surabhi 2017-02-10 08:22:05 UTC
This is not related to BZ https://bugzilla.redhat.com/show_bug.cgi?id=1399148

This should be added to the troubleshooting section: if there is a problem setting up the ganesha cluster and it leads to an inconsistent state, we need to provide the steps to clean up and do the setup again.

The steps for the cleanup are needed.

Comment 5 Bhavana 2017-02-15 12:41:02 UTC
Hi Soumya,

As discussed, can you please share the cleanup steps.

Comment 6 Soumya Koduri 2017-02-20 02:34:30 UTC
The cleanup steps go as follows:

If the nfs-ganesha HA cluster setup fails, run the following on every node forming the cluster to restore the machines to their original state:
* /usr/libexec/ganesha/ganesha-ha.sh --teardown /var/run/gluster/shared_storage/nfs-ganesha
* /usr/libexec/ganesha/ganesha-ha.sh --cleanup /var/run/gluster/shared_storage/nfs-ganesha
* systemctl stop nfs-ganesha

Request Kaleb and Jiffin to review and comment on any additional steps needed.
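
A minimal scripted sketch of running the above steps on each node, assuming passwordless SSH as root; the host names node1..node3 are placeholders, and this is an illustration rather than part of the documented procedure:

    #!/bin/bash
    # Placeholder host names -- replace with the actual ganesha cluster nodes.
    NODES="node1 node2 node3"
    for node in $NODES; do
        # Tear down the HA cluster state, clean up, and stop the service on each node.
        ssh root@"$node" "/usr/libexec/ganesha/ganesha-ha.sh --teardown /var/run/gluster/shared_storage/nfs-ganesha; \
                          /usr/libexec/ganesha/ganesha-ha.sh --cleanup /var/run/gluster/shared_storage/nfs-ganesha; \
                          systemctl stop nfs-ganesha"
    done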

Comment 7 Jiffin 2017-02-20 08:57:38 UTC
I am not sure whether it is worth mentioning setting the glusterd option "nfs-ganesha" correctly in "/var/lib/glusterd/options", so that the state of the volume is made consistent across the cluster.
Secondly, we may also need to remove the contents inside the shared storage ("/var/run/gluster/shared_storage/nfs-ganesha"); see the sketch below.
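
A hedged sketch of these two additional steps on one node; the exact line format of the glusterd options file is an assumption, so verify it on your nodes before editing:

    # Assumption: the option appears as a "nfs-ganesha=enable" line in the file.
    # Reset it so the volume state is consistent across the cluster.
    sed -i 's/^nfs-ganesha=enable$/nfs-ganesha=disable/' /var/lib/glusterd/options
    # Remove leftover HA configuration from the shared storage volume.
    rm -rf /var/run/gluster/shared_storage/nfs-ganesha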

Comment 8 Kaleb KEITHLEY 2017-02-20 19:46:07 UTC
Soumya's steps in Comment 6 are correct. I have nothing to add.

Comment 9 Bhavana 2017-02-21 09:51:14 UTC
The steps provided in comment 6 have been added as one of the Troubleshooting scenarios in the NFS Ganesha section as:

"Cleanup required when nfs-ganesha HA cluster setup fails"

http://ccs-jenkins.gsslab.brq.redhat.com:8080/job/doc-Red_Hat_Gluster_Storage-3.2-Administration_Guide-branch-master/lastSuccessfulBuild/artifact/tmp/en-US/html-single/index.html#sect-NFS_Ganesha

Comment 10 surabhi 2017-03-10 06:50:18 UTC
The steps for the cleanup look good.

Marking the BZ verified.

Comment 11 Rejy M Cyriac 2017-03-24 10:23:32 UTC
RHGS 3.2.0 GA completed on 23 March 2017

