Bug 1398280 - Failed to disable nfs-ganesha if any of the port block processes is in a failed state.
Summary: Failed to disable nfs-ganesha if any of the port block processes is in a failed state.
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: common-ha
Version: rhgs-3.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Kaleb KEITHLEY
QA Contact: Manisha Saini
URL:
Whiteboard:
Depends On:
Blocks: 1351530
 
Reported: 2016-11-24 11:39 UTC by Arthy Loganathan
Modified: 2019-05-20 12:40 UTC
CC List: 8 users

Fixed In Version:
Doc Type: Known Issue
Doc Text:
If any of the PCS resources are in a failed state, the cluster teardown takes a long time to complete, and as a result the "gluster nfs-ganesha disable" command times out. Workaround: If "gluster nfs-ganesha disable" fails with a timeout, run "pcs status" and check whether any resource is in a failed state. Clean up each failed resource with "pcs resource cleanup <resource id>", and then re-execute "gluster nfs-ganesha disable".
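A minimal sketch of the workaround sequence, assuming a generic node prompt and a placeholder resource ID (take the actual ID from the pcs status output):

[root@node1 ~]# pcs status                          # identify any resource reported as failed
[root@node1 ~]# pcs resource cleanup <resource-id>  # clear the failed state of that resource
[root@node1 ~]# gluster nfs-ganesha disable         # re-run the disable after the cleanup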
Clone Of:
Environment:
Last Closed: 2019-05-20 12:40:42 UTC
Embargoed:


Attachments (Terms of Use)

Description Arthy Loganathan 2016-11-24 11:39:17 UTC
Description of problem:
Unable to disable nfs-ganesha if any of the port block processes is in a failed state.

The gluster nfs-ganesha disable command times out.

[root@dhcp46-115 ~]# gluster nfs-ganesha disable
Disabling NFS-Ganesha will tear down entire ganesha cluster across the trusted pool. Do you still want to continue?
 (y/n) y
This will take a few minutes to complete. Please wait ..
Error : Request timed out

Version-Release number of selected component (if applicable):
glusterfs-ganesha-3.8.4-5.el7rhgs.x86_64
nfs-ganesha-2.4.1-1.el7rhgs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Try to disable nfs-ganesha when any of the port block processes is in a failed state (see the check below).
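One way to confirm the precondition, sketched with a generic prompt (the port block processes are typically pacemaker portblock resources; the exact resource names vary per setup):

[root@node1 ~]# pcs status | grep -iE 'failed|portblock'    # look for a port block resource in a failed state
[root@node1 ~]# gluster nfs-ganesha disable                  # then attempt the disable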


Actual results:
Disabling nfs-ganesha times out.

Expected results:
nfs-ganesha should get disabled.

Additional info:

ganesha.log snippet:
--------------------

24/11/2016 16:52:46 : epoch 56730000 : dhcp46-115.lab.eng.blr.redhat.com : ganesha.nfsd-28262[Admin] destroy_fsals :FSAL :EVENT :Shutting down DS handles for FSAL MDCACHE
24/11/2016 16:52:46 : epoch 56730000 : dhcp46-115.lab.eng.blr.redhat.com : ganesha.nfsd-28262[Admin] destroy_fsals :FSAL :EVENT :Shutting down exports for FSAL MDCACHE
24/11/2016 16:52:46 : epoch 56730000 : dhcp46-115.lab.eng.blr.redhat.com : ganesha.nfsd-28262[Admin] destroy_fsals :FSAL :EVENT :Exports for FSAL MDCACHE shut down
24/11/2016 16:52:46 : epoch 56730000 : dhcp46-115.lab.eng.blr.redhat.com : ganesha.nfsd-28262[Admin] destroy_fsals :FSAL :EVENT :Shutting down handles for FSAL PSEUDO
24/11/2016 16:52:46 : epoch 56730000 : dhcp46-115.lab.eng.blr.redhat.com : ganesha.nfsd-28262[Admin] destroy_fsals :FSAL :EVENT :Shutting down DS handles for FSAL PSEUDO
24/11/2016 16:52:46 : epoch 56730000 : dhcp46-115.lab.eng.blr.redhat.com : ganesha.nfsd-28262[Admin] destroy_fsals :FSAL :EVENT :Shutting down exports for FSAL PSEUDO
24/11/2016 16:52:46 : epoch 56730000 : dhcp46-115.lab.eng.blr.redhat.com : ganesha.nfsd-28262[Admin] destroy_fsals :FSAL :EVENT :Exports for FSAL PSEUDO shut down
24/11/2016 16:52:46 : epoch 56730000 : dhcp46-115.lab.eng.blr.redhat.com : ganesha.nfsd-28262[Admin] destroy_fsals :FSAL :CRIT :Extra references (1) hanging around to FSAL PSEUDO
24/11/2016 16:52:46 : epoch 56730000 : dhcp46-115.lab.eng.blr.redhat.com : ganesha.nfsd-28262[Admin] do_shutdown :MAIN :EVENT :FSAL system destroyed.
24/11/2016 16:52:46 : epoch 56730000 : dhcp46-115.lab.eng.blr.redhat.com : ganesha.nfsd-28262[main] nfs_start :MAIN :EVENT :NFS EXIT: regular exit

Comment 5 Arthy Loganathan 2016-11-28 06:32:36 UTC
Soumya,

As you said in comment 4, even though the gluster nfs-ganesha disable CLI command times out, the cluster does get torn down after a while and the nfs-ganesha services are disabled. However, it takes more than ~two hours to clean them up if any of the resources are down.
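A hedged sketch of what can shorten that window, assuming a generic prompt: clean up the failed resources instead of waiting out the slow teardown. Running cleanup without a resource ID acts on all resources:

[root@node1 ~]# pcs resource cleanup         # clear the failed state on all resources
[root@node1 ~]# gluster nfs-ganesha disable  # then re-run the disable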

Comment 8 Bhavana 2017-03-13 06:36:04 UTC
The doc text is slightly edited further for the Release Notes.

Comment 10 Kaleb KEITHLEY 2017-08-23 12:33:36 UTC
will address in storhaug

Comment 13 Jiffin 2019-05-20 12:40:42 UTC
Not planning to fix this in any upcoming release, hence closing as WONTFIX.
This will continue to be tracked as a known issue in the Admin Guide.

