Bug 1305025 - [Doc RFE] Need to add upgrade steps for Nfs-Ganesha in the installation guide
Status: VERIFIED
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: doc-Installation_Guide
Version: 3.1
Hardware: x86_64 Linux
Priority: unspecified    Severity: high
Target Milestone: ---
Target Release: RHGS 3.1.2
Assigned To: Bhavana
QA Contact: storage-qa-internal@redhat.com
Keywords: Documentation, FutureFeature, ZStream
Depends On:
Blocks:
 
Reported: 2016-02-05 06:14 EST by Apeksha
Modified: 2016-11-23 18:14 EST
CC: 11 users

See Also:
Fixed In Version:
Doc Type: Enhancement
Doc Text:
Cause: Consequence: Fix: Result:
Story Points: ---
Clone Of:
Environment:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Apeksha 2016-02-05 06:14:10 EST
Document URL: https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Installation_Guide/index.html

Section Number and Name: We need to add a section for upgrading NFS-Ganesha from 3.1.0 to 3.1.x

Describe the issue: 

Suggestions for improvement: 

Additional information:
Comment 2 Soumya Koduri 2016-02-17 02:38:39 EST
The upgrade steps for a cluster using pacemaker/corosync are clearly documented in the link below:

"Recommended Practices for Applying Software Updates to a RHEL High Availability or Resilient Storage Cluster" https://access.redhat.com/articles/2059253
Comment 3 Bhavana 2016-02-18 03:57:59 EST
Email conversation : Apeksha's reply
--------------------------------------------------------

Hi Bhavana,

The article is about upgrading pcs/pacemaker. We need steps to upgrade glusterfs-ganesha/nfs-ganesha by stopping the nfs-ganesha and pcs/pacemaker services. Also, the article is fairly verbose; we don't need such a detailed explanation and require just the steps, as we have for gluster.

Regards,
Apeksha

--------------------------------------------------------------
Comment 5 Shashank Raj 2016-02-21 22:20:32 EST
Below are the steps that need to be documented for the cluster-wide upgrade of nfs-ganesha:

Upgrade from 3.1/3.1.1 to 3.1.2:

1) Stop the nfs-ganesha service on all the nodes of the cluster:

    service nfs-ganesha stop

    Confirm that it is stopped by running the "pcs status" command on all the nodes.
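
    For illustration only, the stop-and-confirm sequence could be driven from a single node over ssh; the node names nfs1 to nfs4 are just an example:

    for node in nfs1 nfs2 nfs3 nfs4; do
        ssh root@${node} "service nfs-ganesha stop"
        ssh root@${node} "pcs status"    # confirm nfs-ganesha is stopped on this node
    done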

2) Stop the glusterd service and kill any running gluster processes on all the nodes:

    service glusterd stop

    pkill glusterfs

    pkill glusterfsd
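
    A minimal sketch of the same step driven from one node over ssh (node names are illustrative), including a check that no gluster processes are left behind:

    for node in nfs1 nfs2 nfs3 nfs4; do
        ssh root@${node} "service glusterd stop; pkill glusterfs; pkill glusterfsd"
        ssh root@${node} "pgrep -l gluster || echo no gluster processes left"
    done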

3) Place the entire cluster in standby mode on all the nodes:

    pcs cluster standby <node-name>

4) Stop the cluster software on all the nodes using pcs (ensure that this in turn stops pacemaker and cman):

    pcs cluster stop <node-name>

5) Perform the necessary nfs-ganesha package updates on all the nodes:

    yum update nfs-ganesha 

NOTE: i) This installs the glusterfs-ganesha and nfs-ganesha-gluster packages along with other dependent gluster packages.
ii) Some warnings related to shared_storage might appear during the upgrade; they can be ignored.

6) Verify on all the nodes that the required packages are updated, that the nodes are fully functional, and that they are using the correct versions. If anything does not seem correct, do not proceed until the situation is resolved. Contact Red Hat Global Support Services for assistance if needed.
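
    One illustrative way to spot-check the installed package versions from a single node (node names are an assumption, not part of the procedure):

    for node in nfs1 nfs2 nfs3 nfs4; do
        echo "== ${node} =="
        ssh root@${node} "rpm -qa | grep -E 'nfs-ganesha|glusterfs'"
    done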

7) Once everything appears to be set up correctly, start the cluster software on all the nodes.

    pcs cluster start <node-name>

8) Check the pcs status output to determine whether everything appears as it should. Once the nodes seem to be functioning properly, reactivate them for service by taking them out of standby mode:

    pcs cluster unstandby <node-name>

9) Make sure there are no failures or unexpected results.
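
    A quick, illustrative scan of the pcs output for anything that looks wrong (no output is a good sign):

    pcs status | grep -iE 'failed|stopped|offline'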

10) Start the glusterd service on all the nodes:

       service glusterd start

11) Mount the shared storage volume that was created before the upgrade on all the nodes:

       mount -t glusterfs localhost:/gluster_shared_storage /var/run/gluster/shared_storage
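
       To run the mount on every node and verify that it took effect, a sketch along these lines (node names illustrative) can be used:

       for node in nfs1 nfs2 nfs3 nfs4; do
           ssh root@${node} "mount -t glusterfs localhost:/gluster_shared_storage /var/run/gluster/shared_storage"
           ssh root@${node} "grep shared_storage /proc/mounts"    # the mount should be listed
       done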

12) Check on all the nodes whether gluster NFS (glusterfs-nfs) is running after the upgrade:

        ps aux | grep nfs

13) If glusterfs-nfs is running on any node, disable it:

       gluster volume set <volname> nfs.disable on
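
       If gluster NFS shows up on a node, one illustrative way to disable it across all volumes at once, instead of one <volname> at a time, is:

       for vol in $(gluster volume list); do
           gluster volume set ${vol} nfs.disable on
       done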

14) Start the nfs-ganesha service on all the nodes:

       service nfs-ganesha start

15) Verify that all the nodes are fully functional. If anything does not seem correct, do not proceed until the situation is resolved. Contact Red Hat Global Support Services for assistance if needed.
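
    As a final, illustrative cluster-wide check (node names are an example only):

    pcs status    # all resources should be Started and all nodes Online
    for node in nfs1 nfs2 nfs3 nfs4; do
        ssh root@${node} "service nfs-ganesha status"
    done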
Comment 8 Shashank Raj 2016-02-23 08:24:54 EST
Below are sample outputs similar to what you will see while following the upgrade process:


3) Place the entire cluster in standby mode on all the nodes:

    pcs cluster standby <node-name>

Example:
[root@nfs1 ~]# pcs cluster standby nfs1
[root@nfs1 ~]# pcs status
Cluster name: G1455878027.97
Last updated: Tue Feb 23 08:05:13 2016
Last change: Tue Feb 23 08:04:55 2016
Stack: cman
Current DC: nfs1 - partition with quorum
Version: 1.1.11-97629de
4 Nodes configured
16 Resources configured


Node nfs1: standby
Online: [ nfs2 nfs3 nfs4 ]

Full list of resources:

 Clone Set: nfs-mon-clone [nfs-mon]
     Started: [ nfs2 nfs3 nfs4 ]
     Stopped: [ nfs1 ]
 Clone Set: nfs-grace-clone [nfs-grace]
     Started: [ nfs2 nfs3 nfs4 ]
     Stopped: [ nfs1 ]
 nfs1-cluster_ip-1      (ocf::heartbeat:IPaddr):        Started nfs4 
 nfs1-trigger_ip-1      (ocf::heartbeat:Dummy): Started nfs4 
 nfs2-cluster_ip-1      (ocf::heartbeat:IPaddr):        Started nfs2 
 nfs2-trigger_ip-1      (ocf::heartbeat:Dummy): Started nfs2 
 nfs3-cluster_ip-1      (ocf::heartbeat:IPaddr):        Started nfs3 
 nfs3-trigger_ip-1      (ocf::heartbeat:Dummy): Started nfs3 
 nfs4-cluster_ip-1      (ocf::heartbeat:IPaddr):        Started nfs4 
 nfs4-trigger_ip-1      (ocf::heartbeat:Dummy): Started nfs4

4) Stop the cluster software on all the nodes using pcs (ensure that this in turn stops pacemaker and cman):

    pcs cluster stop <node-name>

Example:

[root@nfs1 ~]# pcs cluster stop nfs1
nfs1: Stopping Cluster (pacemaker)...
nfs1: Stopping Cluster (cman)...

[root@nfs1 ~]# pcs status
Error: cluster is not currently running on this node


7) Once everything appears to be set up correctly, start the cluster software on all the nodes.

    pcs cluster start <node-name>

Example:

[root@nfs1 ~]# pcs cluster start nfs1
nfs1: Starting Cluster...

[root@nfs1 ~]# pcs status
Cluster name: G1455878027.97
Last updated: Tue Feb 23 08:12:29 2016
Last change: Tue Feb 23 08:04:55 2016
Stack: cman
Current DC: nfs3 - partition with quorum
Version: 1.1.11-97629de
4 Nodes configured
16 Resources configured


Node nfs1: standby
Online: [ nfs2 nfs3 nfs4 ]

Full list of resources:

 Clone Set: nfs-mon-clone [nfs-mon]
     Started: [ nfs2 nfs3 nfs4 ]
     Stopped: [ nfs1 ]
 Clone Set: nfs-grace-clone [nfs-grace]
     Started: [ nfs2 nfs3 nfs4 ]
     Stopped: [ nfs1 ]
 nfs1-cluster_ip-1      (ocf::heartbeat:IPaddr):        Started nfs4 
 nfs1-trigger_ip-1      (ocf::heartbeat:Dummy): Started nfs4 
 nfs2-cluster_ip-1      (ocf::heartbeat:IPaddr):        Started nfs2 
 nfs2-trigger_ip-1      (ocf::heartbeat:Dummy): Started nfs2 
 nfs3-cluster_ip-1      (ocf::heartbeat:IPaddr):        Started nfs3 
 nfs3-trigger_ip-1      (ocf::heartbeat:Dummy): Started nfs3 
 nfs4-cluster_ip-1      (ocf::heartbeat:IPaddr):        Started nfs4 
 nfs4-trigger_ip-1      (ocf::heartbeat:Dummy): Started nfs4 


8) Check the pcs status output to determine whether everything appears as it should. Once the nodes seem to be functioning properly, reactivate them for service by taking them out of standby mode:

    pcs cluster unstandby <node-name>

Example:

[root@nfs1 ~]# pcs cluster unstandby nfs1
[root@nfs1 ~]# pcs status
Cluster name: G1455878027.97
Last updated: Tue Feb 23 08:14:01 2016
Last change: Tue Feb 23 08:13:57 2016
Stack: cman
Current DC: nfs3 - partition with quorum
Version: 1.1.11-97629de
4 Nodes configured
16 Resources configured


Online: [ nfs1 nfs2 nfs3 nfs4 ]

Full list of resources:

 Clone Set: nfs-mon-clone [nfs-mon]
     Started: [ nfs1 nfs2 nfs3 nfs4 ]
 Clone Set: nfs-grace-clone [nfs-grace]
     Started: [ nfs1 nfs2 nfs3 nfs4 ]
 nfs1-cluster_ip-1      (ocf::heartbeat:IPaddr):        Started nfs1 
 nfs1-trigger_ip-1      (ocf::heartbeat:Dummy): Started nfs1 
 nfs2-cluster_ip-1      (ocf::heartbeat:IPaddr):        Started nfs2 
 nfs2-trigger_ip-1      (ocf::heartbeat:Dummy): Started nfs2 
 nfs3-cluster_ip-1      (ocf::heartbeat:IPaddr):        Started nfs3 
 nfs3-trigger_ip-1      (ocf::heartbeat:Dummy): Started nfs3 
 nfs4-cluster_ip-1      (ocf::heartbeat:IPaddr):        Started nfs4 
 nfs4-trigger_ip-1      (ocf::heartbeat:Dummy): Started nfs4
Comment 9 Bhavana 2016-02-24 01:58:55 EST
A new section, 8.1.1. Updating NFS-Ganesha in the Offline Mode, has been created under the offline update chapter, and the steps mentioned above have been added there.

A link to this section is also added at the beginning of the Offline update chapter.

http://jenkinscat.gsslab.pnq.redhat.com:8080/job/doc-Red_Hat_Gluster_Storage-3.1-Installation_Guide%20%28html-single%29/lastStableBuild/artifact/tmp/en-US/html-single/index.html#NFS_Ganesha_Update
Comment 11 Shashank Raj 2016-02-24 02:22:10 EST
Bhavana,

The above section has been added as part of the 3.0.x to 3.1.x upgrade process, but since nfs-ganesha was not part of 3.0.x, can we add it as a separate section referring to the offline upgrade from 3.1 to 3.1.x, marking it as 8.2 instead of 8.1.1?

Also, in the current section 8.2, we need to add the below information as a note:

"NFS Ganesha does not support the in service upgrade, so all the running services and IO's has to be stopped before starting the upgrade process."
Comment 12 Bhavana 2016-02-24 03:11:55 EST
The section has been changed to 8.2 as suggested, and the additional suggestion of adding a note has been incorporated too.

http://jenkinscat.gsslab.pnq.redhat.com:8080/job/doc-Red_Hat_Gluster_Storage-3.1-Installation_Guide%20%28html-single%29/lastStableBuild/artifact/tmp/en-US/html-single/index.html#NFS_Ganesha_Update
Comment 13 Shashank Raj 2016-02-24 03:39:41 EST
Verified and the document looks good.

Marking this bug as verified.
