Bug 1235971 - nfs-ganesha: ganesha-ha.sh --status is actually same as "pcs status"
Summary: nfs-ganesha: ganesha-ha.sh --status is actually same as "pcs status"
Keywords:
Status: CLOSED ERRATA
Alias: None
Deadline: 2015-08-28
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: nfs-ganesha
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.1.1
Assignee: Kaleb KEITHLEY
QA Contact: Saurabh
URL:
Whiteboard:
Depends On:
Blocks: 1250628 1251815 1256227
 
Reported: 2015-06-26 08:56 UTC by Saurabh
Modified: 2016-01-19 06:15 UTC (History)
12 users

Fixed In Version: glusterfs-3.7.1-13
Doc Type: Bug Fix
Doc Text:
Previously, the ganesha-ha.sh --status command printed the output of "pcs status" as-is, which was not user friendly. With this fix, the output of pcs status is formatted so that it is easily understandable by the user.
Clone Of:
Clones: 1250628
Environment:
Last Closed: 2015-10-05 07:15:09 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2015:1845 0 normal SHIPPED_LIVE Moderate: Red Hat Gluster Storage 3.1 update 2015-10-05 11:06:22 UTC

Description Saurabh 2015-06-26 08:56:12 UTC
Description of problem:
Presently, ganesha-ha.sh --status is executed to gather information about the nfs-ganesha cluster, and its output is identical to that of the "pcs status" command.

We should modify the output to make it more easily understandable.

Version-Release number of selected component (if applicable):
glusterfs-3.7.1-5.el6rhs.x86_64
nfs-ganesha-2.2.0-3.el6rhs.x86_64

How reproducible:
always


Actual results:
[root@nfs11 ~]# time /usr/libexec/ganesha/ganesha-ha.sh --status
grep: /ganesha-ha.conf: No such file or directory
grep: /ganesha-ha.conf: No such file or directory
grep: /ganesha-ha.conf: No such file or directory
Cluster name: nozomer
Last updated: Fri Jun 26 14:12:32 2015
Last change: Thu Jun 25 17:03:58 2015
Stack: cman
Current DC: nfs12 - partition with quorum
Version: 1.1.11-97629de
4 Nodes configured
16 Resources configured


Online: [ nfs11 nfs12 nfs13 nfs14 ]

Full list of resources:

 Clone Set: nfs-mon-clone [nfs-mon]
     Started: [ nfs11 nfs12 nfs13 nfs14 ]
 Clone Set: nfs-grace-clone [nfs-grace]
     Started: [ nfs11 nfs12 nfs13 nfs14 ]
 nfs11-cluster_ip-1	(ocf::heartbeat:IPaddr):	Started nfs11 
 nfs11-trigger_ip-1	(ocf::heartbeat:Dummy):	Started nfs11 
 nfs12-cluster_ip-1	(ocf::heartbeat:IPaddr):	Started nfs12 
 nfs12-trigger_ip-1	(ocf::heartbeat:Dummy):	Started nfs12 
 nfs13-cluster_ip-1	(ocf::heartbeat:IPaddr):	Started nfs13 
 nfs13-trigger_ip-1	(ocf::heartbeat:Dummy):	Started nfs13 
 nfs14-cluster_ip-1	(ocf::heartbeat:IPaddr):	Started nfs14 
 nfs14-trigger_ip-1	(ocf::heartbeat:Dummy):	Started nfs14 


Expected results:
We should remove the following information from the status output:
1. "Last updated: Fri Jun 26 14:12:32 2015
Last change: Thu Jun 25 17:03:58 2015
Stack: cman
Current DC: nfs12 - partition with quorum
Version: 1.1.11-97629de"

2. "16 Resources configured"

3. The ocf::heartbeat agent information.

We should modify the output so that it only displays the status of the nfs-ganesha cluster.

Additional info:

Comment 3 Kaleb KEITHLEY 2015-08-05 15:31:45 UTC
What should be displayed?

Comment 6 Saurabh 2015-08-14 09:14:47 UTC
(In reply to Kaleb KEITHLEY from comment #3)
> What should be displayed?

Putting forward my thoughts:

1. Let's remove the extra information, such as the lines below:
   "Cluster name: nozomer
Last updated: Fri Jun 26 14:12:32 2015
Last change: Thu Jun 25 17:03:58 2015
Stack: cman
Current DC: nfs12 - partition with quorum
Version: 1.1.11-97629de"

2. It should report the online nodes on which nfs-ganesha is running, such as:

   Online: [ nfs11 nfs12 nfs13 nfs14 ]

3. It should also report the current running status of each resource, including information about service failover to another node, such as:
   "     Started: [ nfs11 nfs12 nfs13 nfs14 ]
 nfs11-cluster_ip-1	(ocf::heartbeat:IPaddr):	Started nfs11 
 nfs11-trigger_ip-1	(ocf::heartbeat:Dummy):	Started nfs11 
 nfs12-cluster_ip-1	(ocf::heartbeat:IPaddr):	Started nfs12 
 nfs12-trigger_ip-1	(ocf::heartbeat:Dummy):	Started nfs12 
 nfs13-cluster_ip-1	(ocf::heartbeat:IPaddr):	Started nfs13 
 nfs13-trigger_ip-1	(ocf::heartbeat:Dummy):	Started nfs13 
 nfs14-cluster_ip-1	(ocf::heartbeat:IPaddr):	Started nfs14 
 nfs14-trigger_ip-1	(ocf::heartbeat:Dummy):	Started nfs14 "

   But from these lines we need to remove the pcs-specific info "(ocf::heartbeat:IPaddr)" and "(ocf::heartbeat:Dummy)", or replace it with something more readable.

  Hope this sets the tone for the change; if more input is required, please let me know.
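The filtering suggested in this comment could be sketched as a small wrapper over "pcs status". This is a hypothetical sketch only, not the shipped fix (which arrived in glusterfs-3.7.1-13); the function name ganesha_status_filter and the awk patterns are illustrative assumptions.

```shell
# Hypothetical sketch of the filtering suggested above; not the shipped fix.
# Reads "pcs status" output on stdin and prints only the "Online:" line plus
# simplified per-resource lines with the pcs agent names stripped.
ganesha_status_filter() {
    awk '
        /^Online:/ { print; next }          # nodes the cluster sees online
        /cluster_ip|trigger_ip/ {
            # drop the pcs agent field, e.g. "(ocf::heartbeat:IPaddr):"
            gsub(/\(ocf::heartbeat:[A-Za-z]+\):/, "")
            gsub(/^[ \t]+/, ""); gsub(/[ \t]+$/, "")   # trim edges
            gsub(/[ \t]+/, " ")                        # collapse whitespace
            print
        }
    '
}

# Usage: pcs status | ganesha_status_filter
```

The "Online:" line passes through untouched, while resource lines keep only the resource name and the node it is started on, matching the simplified output shown in comment 8.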

Comment 8 Saurabh 2015-08-28 07:09:39 UTC
Moving this bz to verified:
[root@nfs11 ~]# time bash /usr/libexec/ganesha/ganesha-ha.sh --status
Online: [ nfs11.lab.eng.blr.redhat.com nfs12.lab.eng.blr.redhat.com nfs13.lab.eng.blr.redhat.com ]

nfs11.lab.eng.blr.redhat.com-cluster_ip-1 nfs13.lab.eng.blr.redhat.com
nfs11.lab.eng.blr.redhat.com-trigger_ip-1 nfs13.lab.eng.blr.redhat.com
nfs12.lab.eng.blr.redhat.com-cluster_ip-1 nfs12.lab.eng.blr.redhat.com
nfs12.lab.eng.blr.redhat.com-trigger_ip-1 nfs12.lab.eng.blr.redhat.com
nfs13.lab.eng.blr.redhat.com-cluster_ip-1 nfs13.lab.eng.blr.redhat.com
nfs13.lab.eng.blr.redhat.com-trigger_ip-1 nfs13.lab.eng.blr.redhat.com

Comment 10 errata-xmlrpc 2015-10-05 07:15:09 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1845.html

