Bug 1364410 - [ganesha+gdeploy]: Validate status of HA cluster once ganesha is enabled on the cluster.
Summary: [ganesha+gdeploy]: Validate status of HA cluster once ganesha is enabled on the cluster.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: gdeploy
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.2.0
Assignee: Sachidananda Urs
QA Contact: Manisha Saini
URL:
Whiteboard:
Depends On: 1393966 1395539 1395648 1395649 1395652
Blocks: 1351528
 
Reported: 2016-08-05 09:58 UTC by Shashank Raj
Modified: 2017-03-23 05:07 UTC
CC: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-03-23 05:07:56 UTC
Embargoed:




Links
System ID: Red Hat Product Errata RHEA-2017:0482
Priority: normal
Status: SHIPPED_LIVE
Summary: Red Hat Gluster Storage 3.2.0 gdeploy bug fix and enhancement update
Last Updated: 2017-03-23 09:06:28 UTC

Description Shashank Raj 2016-08-05 09:58:38 UTC
Description of problem:

Validate status of HA cluster once ganesha is enabled on the cluster.

Version-Release number of selected component (if applicable):

How reproducible:


Steps to Reproduce:

Currently, nfs-ganesha setup creation works fine with gdeploy, but there is no way to validate whether all the nodes are in a proper state once ganesha is enabled on the cluster.

Actual results:

No validation of the nfs-ganesha setup is performed once ganesha is enabled on the cluster.

Expected results:

Validation can be done using the below script, which we use to check and verify the HA status:

/usr/libexec/ganesha/ganesha-ha.sh --status
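
As a rough illustration (a sketch only, not the gdeploy implementation), a post-setup check could run the script and fail loudly if it does not complete cleanly:

# Sketch: run the HA status script after setup and surface any failure.
# Assumes the script exits non-zero on error; parsing of the output is
# left to the tooling.
if ! status_output=$(/usr/libexec/ganesha/ganesha-ha.sh --status 2>&1); then
    echo "nfs-ganesha HA status check failed:" >&2
    echo "${status_output}" >&2
    exit 1
fi
echo "${status_output}"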

Additional info:

Comment 2 Sachidananda Urs 2016-08-11 12:24:23 UTC
Is the expectation to print the status at the end of setup, or should the user be able to check it on demand, long after the setup is done?

A [shell] section can be written at the end of the configuration file to figure out the status.

If the user expects to have the status printed at the end of the installation, then I'll update the nfs-ganesha module.
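
For reference, such a [shell] section might look like the sketch below (host1/host2 are placeholder hostnames, and the action=execute/command keys are assumed from gdeploy's shell module):

# Hypothetical gdeploy configuration snippet, appended after the
# nfs-ganesha setup sections; hostnames are placeholders.
[hosts]
host1.example.com
host2.example.com

[shell]
action=execute
command=/usr/libexec/ganesha/ganesha-ha.sh --status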

Comment 3 Shashank Raj 2016-08-11 14:51:48 UTC
The status should be checked right after we perform gluster nfs-ganesha enable, and the check should be included in the nfs-ganesha setup module.

Something like the following would be good to have:

2016-08-11 16:41:29,627 INFO run root.eng.blr.redhat.com: /usr/libexec/ganesha/ganesha-ha.sh --status | grep -v 'Online' | cut -d ' ' -f 1 | sed s/'-cluster_ip-1'//g | sed s/'-trigger_ip-1'//g
2016-08-11 16:41:46,069 INFO run RETCODE: 0
2016-08-11 16:41:46,070 INFO run STDOUT:

dhcp43-133.lab.eng.blr.redhat.com
dhcp41-206.lab.eng.blr.redhat.com

2016-08-11 16:41:46,070 INFO run root.eng.blr.redhat.com: /usr/libexec/ganesha/ganesha-ha.sh --status | grep -v 'Online' | cut -d ' ' -f 2
2016-08-11 16:42:02,560 INFO run RETCODE: 0
2016-08-11 16:42:02,560 INFO run STDOUT:

dhcp43-133.lab.eng.blr.redhat.com
dhcp41-206.lab.eng.blr.redhat.com

2016-08-11 16:42:02,560 INFO ganesha_ha_status ganesha ha status is correct
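
In other words, the check compares the hostnames embedded in the resource names with the nodes those resources are actually online on, and the HA status is treated as correct when the two lists match. A rough shell equivalent of that comparison (a sketch mirroring the pipelines in the log above, not the exact module code):

# Sketch: derive the expected and actual node lists from the HA status
# output and compare them.
expected=$(/usr/libexec/ganesha/ganesha-ha.sh --status | grep -v 'Online' | cut -d ' ' -f 1 | sed s/'-cluster_ip-1'//g | sed s/'-trigger_ip-1'//g)
actual=$(/usr/libexec/ganesha/ganesha-ha.sh --status | grep -v 'Online' | cut -d ' ' -f 2)

if [ "$expected" = "$actual" ]; then
    echo "ganesha ha status is correct"
else
    echo "ganesha ha status is incorrect" >&2
    exit 1
fi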

Comment 4 Sachidananda Urs 2016-09-20 07:12:59 UTC
Rebase fixes the issue.

Comment 9 Rahul Hinduja 2016-11-16 09:47:23 UTC
Verification works if the nfs-ganesha setup is proper. However, when any node in the cluster goes down, the reported status is still incorrect. Bug 1393966 addresses that issue, hence marking this bug dependent on it for its complete verification.

Comment 10 Manisha Saini 2017-01-13 09:56:52 UTC
Verified this bug on:

gdeploy-2.0.1-8.el7rhgs.noarch
glusterfs-ganesha-3.8.4-11.el7rhgs.x86_64

At the end of cluster creation, gdeploy now validates the status of the nfs-ganesha HA cluster.
Hence marking this bug as verified.

Comment 12 errata-xmlrpc 2017-03-23 05:07:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2017-0482.html

