Bug 1364410
| Summary: | [ganesha+gdeploy]: Validate status of HA cluster once ganesha is enabled on the cluster. | ||
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Shashank Raj <sraj> |
| Component: | gdeploy | Assignee: | Sachidananda Urs <surs> |
| Status: | CLOSED ERRATA | QA Contact: | Manisha Saini <msaini> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | ||
| Version: | rhgs-3.1 | CC: | jthottan, kkeithle, mzywusko, ndevos, rcyriac, rhinduja, skoduri, smohan, storage-qa-internal |
| Target Milestone: | --- | ||
| Target Release: | RHGS 3.2.0 | ||
| Hardware: | x86_64 | ||
| OS: | Linux | ||
| Whiteboard: | |||
| Fixed In Version: | Doc Type: | If docs needed, set a value | |
| Doc Text: | Story Points: | --- | |
| Clone Of: | Environment: | ||
| Last Closed: | 2017-03-23 05:07:56 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
| Bug Depends On: | 1393966, 1395539, 1395648, 1395649, 1395652 | ||
| Bug Blocks: | 1351528 | ||
Description
Shashank Raj
2016-08-05 09:58:38 UTC
Is the expectation to print the status at the end of setup, or is the expectation that the user has to know on demand, long after the setup is done? A [shell] section can be written at the end of the configuration file to figure out the status. If the user expects to have the status printed at the end of the installation, then I'll update the nfs-ganesha module.

The status should be checked right after we perform `gluster nfs-ganesha enable`, and the check should be included in the nfs-ganesha setup module. Something like the below would be good to have:

```
2016-08-11 16:41:29,627 INFO run root.eng.blr.redhat.com: /usr/libexec/ganesha/ganesha-ha.sh --status | grep -v 'Online' | cut -d ' ' -f 1 | sed s/'-cluster_ip-1'//g | sed s/'-trigger_ip-1'//g
2016-08-11 16:41:46,069 INFO run RETCODE: 0
2016-08-11 16:41:46,070 INFO run STDOUT: dhcp43-133.lab.eng.blr.redhat.com dhcp41-206.lab.eng.blr.redhat.com
2016-08-11 16:41:46,070 INFO run root.eng.blr.redhat.com: /usr/libexec/ganesha/ganesha-ha.sh --status | grep -v 'Online' | cut -d ' ' -f 2
2016-08-11 16:42:02,560 INFO run RETCODE: 0
2016-08-11 16:42:02,560 INFO run STDOUT: dhcp43-133.lab.eng.blr.redhat.com dhcp41-206.lab.eng.blr.redhat.com
2016-08-11 16:42:02,560 INFO ganesha_ha_status ganesha ha status is correct
```

The rebase fixes the issue.

Verification works if the nfs-ganesha setup is proper. However, when any node in the cluster goes down, the status reported is still inappropriate. Bug 1393966 addresses that issue; hence, marking this bug dependent on it for its complete verification.

Verified this bug on:
gdeploy-2.0.1-8.el7rhgs.noarch
glusterfs-ganesha-3.8.4-11.el7rhgs.x86_64

At the end of cluster creation, gdeploy now validates the status of the nfs-ganesha HA cluster. Hence, marking this bug as verified.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2017-0482.html
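The on-demand [shell] workaround mentioned in the discussion above could look roughly like this in a gdeploy configuration file. This is a hedged sketch, not gdeploy's shipped behavior: the hostnames are placeholders, and the section follows gdeploy's INI-style module syntax as I understand it.

```ini
# Illustrative gdeploy snippet: run the HA status check as a final step.
# Hostnames below are placeholders for the actual cluster nodes.
[hosts]
server1.example.com
server2.example.com

# The [shell] module executes the given command on the listed hosts,
# printing the nfs-ganesha HA cluster status at the end of the run.
[shell]
action=execute
command=/usr/libexec/ganesha/ganesha-ha.sh --status
```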
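The validation that the log excerpt above performs with `grep`/`cut`/`sed` can be sketched in Python. This is a hypothetical standalone re-implementation, not gdeploy's actual code: it strips the `-cluster_ip-1` / `-trigger_ip-1` suffixes from the resource names and confirms each virtual-IP resource is running on its own home node, which is what "ganesha ha status is correct" means in the log.

```python
import re

def ha_status_ok(status_lines):
    """Check `ganesha-ha.sh --status`-style output: each resource line
    pairs a resource name (hostname plus a -cluster_ip-1 or
    -trigger_ip-1 suffix) with the node currently hosting it. The
    cluster is healthy when the two hostnames match on every line."""
    for line in status_lines:
        # Skip the "Online: [ ... ]" summary line and any blank lines,
        # mirroring the `grep -v 'Online'` step in the log above.
        if 'Online' in line or not line.strip():
            continue
        resource, node = line.split()[:2]
        home = re.sub(r'-(cluster_ip|trigger_ip)-1$', '', resource)
        if home != node:
            return False  # resource has failed over to another node
    return True

# Sample output modelled on the log excerpt above (layout is assumed).
sample = [
    "Online: [ dhcp43-133.lab.eng.blr.redhat.com dhcp41-206.lab.eng.blr.redhat.com ]",
    "dhcp43-133.lab.eng.blr.redhat.com-cluster_ip-1 dhcp43-133.lab.eng.blr.redhat.com",
    "dhcp41-206.lab.eng.blr.redhat.com-trigger_ip-1 dhcp41-206.lab.eng.blr.redhat.com",
]
print(ha_status_ok(sample))  # True: every resource sits on its home node
```

A failover case (resource hosted on a different node than its name implies) makes the check return False, which matches the comment above that the status is inappropriate once a node goes down.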