Bug 1398280
| Summary: | Failed to disable nfs-ganesha if any of the port block processes are in a failed state. | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Arthy Loganathan <aloganat> |
| Component: | common-ha | Assignee: | Kaleb KEITHLEY <kkeithle> |
| Status: | CLOSED WONTFIX | QA Contact: | Manisha Saini <msaini> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | rhgs-3.2 | CC: | amukherj, arjsharm, bmohanra, jthottan, pasik, rhs-bugs, skoduri, storage-qa-internal |
| Target Milestone: | --- | Keywords: | Triaged, ZStream |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Known Issue |
| Doc Text: | If any of the PCS resources are in a failed state, the teardown takes a long time to complete and the "gluster nfs-ganesha disable" command times out.<br>Workaround: If "gluster nfs-ganesha disable" fails with a timeout, run "pcs status" and check whether any resource is in a failed state. Clean up that resource with "pcs resource cleanup <resource id>", then re-run "gluster nfs-ganesha disable". (An example command sequence follows the table.) | | |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-05-20 12:40:42 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1351530 | | |
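
The Doc Text above describes the workaround only in prose. Below is a minimal sketch of that command sequence, assuming a pcs-managed pacemaker cluster on one of the nfs-ganesha HA nodes; `<resource_id>` is a placeholder for whichever resource "pcs status" reports as failed, not a real name from this bug:

```
# 1. Inspect the cluster and note any resource reported as failed.
pcs status

# 2. Clear the failure history of that resource so the teardown can proceed.
#    Replace <resource_id> with the failed resource ID taken from "pcs status".
pcs resource cleanup <resource_id>

# 3. Re-run the teardown once the cleanup has completed.
gluster nfs-ganesha disable
```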
Description
Arthy Loganathan
2016-11-24 11:39:17 UTC
Soumya, as you said in comment 4, although the "gluster nfs-ganesha disable" CLI command times out, after a while the cluster is torn down and the nfs-ganesha services are disabled. However, if any of the resources are down, it takes more than ~two hours to clean them up.

The doc text has been slightly edited further for the Release Notes.

Will address in storhaug.

Not planning to fix this in any upcoming release, hence closing as WONTFIX. This will remain tracked as a known issue in the admin guide.