Bug 1347196 - [nfs-ganesha]: Update the upgrade section of nfs-ganesha with 3.1.3 changes.
Summary: [nfs-ganesha]: Update the upgrade section of nfs-ganesha with 3.1.3 changes.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: doc-Installation_Guide
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.1.3
Assignee: Laura Bailey
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard:
Depends On: 1347286
Blocks: 1311847
TreeView+ depends on / blocked
 
Reported: 2016-06-16 09:38 UTC by Shashank Raj
Modified: 2016-11-08 03:53 UTC (History)
11 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-06-29 14:23:10 UTC
Target Upstream Version:



Description Shashank Raj 2016-06-16 09:38:12 UTC
Document URL: 

http://jenkinscat.gsslab.pnq.redhat.com:8080/job/doc-Red_Hat_Gluster_Storage-3.1-Installation_Guide%20%28html-single%29/lastStableBuild/artifact/tmp/en-US/html-single/index.html#NFS_Ganesha_Update

Section Number and Name: 

8.2. Updating NFS-Ganesha in the Offline Mode

Describe the issue: 

Based on discussions between the development and QE teams, the following should be the new section for the nfs-ganesha upgrade:

8.2. Updating NFS-Ganesha in the Offline Mode

Execute the following steps to update the NFS-Ganesha service from Red Hat Gluster Storage 3.1 to Red Hat Gluster Storage 3.1.1 or later: 

Note

NFS-Ganesha does not support in-service updates, so all running services and I/O must be stopped before starting the update process. 

1. Stop the nfs-ganesha service on all the nodes of the cluster by executing the following command:

# service nfs-ganesha stop

2. Verify the status by executing the following command on all the nodes: 

# pcs status

3. Stop the glusterd service and kill any running gluster process on all the nodes:

# service glusterd stop
# pkill glusterfs
# pkill glusterfsd

4. Place the entire cluster in standby mode on all the nodes by executing the following command:

# pcs cluster standby <node-name>

For example: 
# pcs cluster standby nfs1
# pcs status

Cluster name: G1455878027.97
Last updated: Tue Feb 23 08:05:13 2016
Last change: Tue Feb 23 08:04:55 2016
Stack: cman
Current DC: nfs1 - partition with quorum
Version: 1.1.11-97629de
4 Nodes configured
16 Resources configured


Node nfs1: standby
Online: [ nfs2 nfs3 nfs4 ]

....
5. Stop the cluster software on all the nodes using pcs by executing the following command:

# pcs cluster stop <node-name>

Ensure that it stops pacemaker and cman. 
For example: 
# pcs cluster stop nfs1
nfs1: Stopping Cluster (pacemaker)...
nfs1: Stopping Cluster (cman)...
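Steps 4 and 5 run once per node, so they can be looped over the whole cluster. A minimal sketch, assuming the four example node names (nfs1 through nfs4) from the pcs status output above; the loop prints each command as a dry run, so remove the echo to actually execute them:

```shell
# Hypothetical node list; adjust to match your cluster.
NODES="nfs1 nfs2 nfs3 nfs4"

# Dry run: print the standby command for every node.
for node in $NODES; do
    echo "pcs cluster standby $node"
done

# Dry run: print the cluster stop command for every node.
for node in $NODES; do
    echo "pcs cluster stop $node"
done
```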

6. Update the NFS-Ganesha packages on all the nodes by executing the following command:

# yum update nfs-ganesha
# yum update glusterfs-ganesha

Note

This will install the glusterfs-ganesha and nfs-ganesha-gluster packages along with other dependent gluster packages. 
Some warnings related to shared_storage might appear during the upgrade; these can be ignored. 
Verify on all the nodes that the required packages are updated, that the nodes are fully functional, and that they are using the correct versions. If anything does not seem correct, do not proceed until the situation is resolved. Contact Red Hat Global Support Services for assistance if needed. 

7. a) After the upgrade, copy the export entries for all the volumes from the old ganesha.conf file to the newly created ganesha.conf.rpmsave file under /etc/ganesha/.
The export entries look like the following:
%include "/etc/ganesha/exports/export.vol1.conf"
b) Remove the old ganesha.conf file and rename ganesha.conf.rpmsave to ganesha.conf.

8. Change the firewall settings (if required) for the new services and ports, as described in the Important section of 7.2.4. NFS-Ganesha in the Red Hat Gluster Storage 3.1.3 Administration Guide.

9. Start glusterd service on all the nodes by executing the following command: 
# service glusterd start

10. Mount the shared storage volume created before the update on all the nodes: 
# mount -t glusterfs localhost:/gluster_shared_storage /var/run/gluster/shared_storage

11. Start the nfs-ganesha service on all the nodes by executing the following command:
# service nfs-ganesha start

12. Start the cluster software on all the nodes by executing the following command:
# pcs cluster start <node-name>

For example: 
# pcs cluster start nfs1
nfs1: Starting Cluster...
13. Check the pcs status output to verify that everything is as expected. Once the nodes are functioning properly, reactivate them for service by taking them out of standby mode by executing the following command:
# pcs cluster unstandby <node-name>

For example: 
# pcs cluster unstandby nfs1

# pcs status
Cluster name: G1455878027.97
Last updated: Tue Feb 23 08:14:01 2016
Last change: Tue Feb 23 08:13:57 2016
Stack: cman
Current DC: nfs3 - partition with quorum
Version: 1.1.11-97629de
4 Nodes configured
16 Resources configured


Online: [ nfs1 nfs2 nfs3 nfs4 ]

....
Make sure there are no failures or unexpected results. 


Suggestions for improvement: 

Additional information:

Comment 2 Shashank Raj 2016-06-16 13:25:06 UTC
We are going to have new upgrade steps for nfs-ganesha based on https://bugzilla.redhat.com/show_bug.cgi?id=1347286#c2 and will provide them as soon as possible.

Please hold off on the doc update until we provide the new steps.

Comment 4 Shashank Raj 2016-06-17 09:56:57 UTC
Below are the final upgrade steps for nfs-ganesha, which need to go into the Installation Guide:

*************************************************************************

8.2. Updating NFS-Ganesha in the Offline Mode

Execute the following steps to update the NFS-Ganesha packages from Red Hat Gluster Storage 3.1.1/3.1.2 to Red Hat Gluster Storage 3.1.3:

Note

 NFS-Ganesha does not support in-service updates, so all running services and I/O must be stopped before starting the update process. 

1. Back up all the volume export files under /etc/ganesha/exports and ganesha.conf under /etc/ganesha, in a backup directory on all the nodes:

For example:

# cp /etc/ganesha/exports/export.v1.conf backup/
# cp /etc/ganesha/exports/export.v2.conf backup/
# cp /etc/ganesha/exports/export.v3.conf backup/
# cp /etc/ganesha/exports/export.v4.conf backup/
# cp /etc/ganesha/exports/export.v5.conf backup/
# cp /etc/ganesha/ganesha.conf backup/
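The backup in step 1 can also be scripted so it is identical on every node. A minimal sketch, assuming a POSIX shell; backup_ganesha and the paths passed to it are hypothetical names introduced here for illustration:

```shell
# backup_ganesha SRC DEST: copy ganesha.conf and every export file from
# SRC (e.g. /etc/ganesha) into DEST (e.g. /root/ganesha-backup).
backup_ganesha() {
    src="$1"
    dest="$2"
    mkdir -p "$dest/exports"
    cp "$src/ganesha.conf" "$dest/"
    cp "$src"/exports/export.*.conf "$dest/exports/"
}

# On a node this would be run as:
#   backup_ganesha /etc/ganesha /root/ganesha-backup
```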

2. Disable nfs-ganesha on the cluster using the following command:

# gluster nfs-ganesha disable

For example:

# gluster nfs-ganesha disable
This will take a few minutes to complete. Please wait ..
nfs-ganesha : success 

3. Disable the shared volume in the cluster using the following command:

# gluster volume set all cluster.enable-shared-storage disable

For example:

# gluster volume set all cluster.enable-shared-storage disable
Disabling cluster.enable-shared-storage will delete the shared storage volume(gluster_shared_storage), which is used by snapshot scheduler, geo-replication and NFS-Ganesha. Do you still want to continue? (y/n) y
volume set: success


4. Stop the glusterd service and kill the running gluster processes on all the nodes using the following commands:

# systemctl stop glusterd
# pkill glusterfs
# pkill glusterfsd

5. On all the nodes, ensure that all gluster processes are stopped using the following command. If any gluster processes are still running, terminate them using kill:

# pgrep gluster
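The check in step 5 can be wrapped in a small helper that only escalates to SIGKILL when a process is still alive. A sketch, assuming the procps pgrep/pkill tools are available; ensure_stopped is a hypothetical helper name:

```shell
# ensure_stopped NAME: check for a process with exactly this name and
# send SIGKILL only if one is still running.
ensure_stopped() {
    name="$1"
    if pgrep -x "$name" > /dev/null; then
        echo "$name still running, sending SIGKILL"
        pkill -9 -x "$name"
    else
        echo "$name is stopped"
    fi
}

# On a node, after step 4:
#   ensure_stopped glusterfsd
#   ensure_stopped glusterfs
```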

6. Stop the pcsd service on all the nodes:

# systemctl stop pcsd

7. Update the NFS-Ganesha packages on all the nodes by executing the following command:

# yum update nfs-ganesha
# yum update glusterfs-ganesha

Note

This will install other dependent packages (if any). 

Verify on all the nodes that the required packages are updated, that the nodes are fully functional, and that they are using the correct versions. If anything does not seem correct, do not proceed until the situation is resolved. Contact Red Hat Global Support Services for assistance if needed. 
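One way to script the version check is to compare the installed version string (obtained on a node with, for example, rpm -q --queryformat '%{VERSION}' nfs-ganesha) against the minimum you expect. A sketch, assuming GNU sort with -V support; the version numbers shown are illustrative, not the official RHGS 3.1.3 package versions:

```shell
# version_ge A B: succeeds when version string A is at least version B,
# using sort -V for proper version ordering (so 2.10 > 2.9).
version_ge() {
    [ "$(printf '%s\n' "$1" "$2" | sort -V | tail -n1)" = "$1" ]
}

# Illustrative check; on a node the first argument would come from rpm.
if version_ge "2.3.1" "2.2.0"; then
    echo "nfs-ganesha version is new enough"
fi
```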

8. Start the glusterd and pcsd services on all the nodes:

# systemctl start glusterd
# systemctl start pcsd

9. a) Remove the old ganesha.conf file and rename the newly created ganesha.conf.rpmsave to ganesha.conf under /etc/ganesha.

b) Copy the volumes' export information from the backup copy of ganesha.conf to the newly renamed ganesha.conf under /etc/ganesha.

The export entries in the backup copy of ganesha.conf look like the following:

%include "/etc/ganesha/exports/export.v1.conf"
%include "/etc/ganesha/exports/export.v2.conf"
%include "/etc/ganesha/exports/export.v3.conf"
%include "/etc/ganesha/exports/export.v4.conf"
%include "/etc/ganesha/exports/export.v5.conf"

c) Copy the backed-up volume export files from the backup directory to /etc/ganesha/exports:

# cp export.* /etc/ganesha/exports/
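Step 9 b) amounts to carrying the %include lines from the backed-up ganesha.conf into the renamed one. A sketch, assuming the backup directory from step 1; merge_exports is a hypothetical helper name:

```shell
# merge_exports BACKUP_CONF NEW_CONF: append every %include line from the
# backed-up ganesha.conf to the freshly renamed ganesha.conf.
merge_exports() {
    backup_conf="$1"
    new_conf="$2"
    grep '^%include' "$backup_conf" >> "$new_conf"
}

# On a node this would be run as:
#   merge_exports /root/ganesha-backup/ganesha.conf /etc/ganesha/ganesha.conf
```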

10. Change the firewall settings for the new services and ports, as described in the Important section of 7.2.4. NFS-Ganesha in the Red Hat Gluster Storage 3.1.3 Administration Guide.

11. Enable the shared volume in the cluster:

# gluster volume set all cluster.enable-shared-storage enable

For example:

# gluster volume set all cluster.enable-shared-storage enable
volume set: success


12. Enable nfs-ganesha on the cluster:

# gluster nfs-ganesha enable

For example:

# gluster nfs-ganesha enable
Enabling NFS-Ganesha requires Gluster-NFS to be disabled across the trusted pool. Do you still want to continue?
 (y/n) y
This will take a few minutes to complete. Please wait ..
nfs-ganesha : success 

Important

Verify that all the nodes are fully functional. If anything does not seem correct, then do not proceed until the situation is resolved. Contact Red Hat Global Support Services for assistance if required. 

**************************************************************************

Comment 5 Divya 2016-06-17 11:59:52 UTC
Shashank,

I have updated the doc based on Comment 4. 

Link to the latest doc: http://jenkinscat.gsslab.pnq.redhat.com:8080/job/doc-Red_Hat_Gluster_Storage-3.1-Installation_Guide%20%28html-single%29/lastStableBuild/artifact/tmp/en-US/html-single/index.html#NFS_Ganesha_Update

I am fixing a few formatting issues and editing the text. Could you review it for technical accuracy and let me know if it requires any changes?

One question about the prelude: "Execute the following steps to update the NFS-Ganesha packages from Red Hat Gluster Storage 3.1.1/3.1.2 to Red Hat Gluster Storage 3.1.3:"

We do not have a separate section for update from 3.1 to 3.1.x. Hence, I have updated it as "Execute the following steps to update the NFS-Ganesha service from Red Hat Gluster Storage 3.1.x to Red Hat Gluster Storage 3.1.1 or later:"

Let me know if this is fine.

Thanks!



Comment 6 Shashank Raj 2016-06-17 13:12:49 UTC
Please modify the initial statement as below:

>>>>> Execute the following steps to update the NFS-Ganesha packages from Red Hat Gluster Storage 3.1.x to Red Hat Gluster Storage 3.1.3:

Apart from that, please make the following changes:

>>>> 4.  Stop the glusterd service and kill any running gluster process on all the nodes by executing the following commands: 

# service glusterd stop

Please change it to systemctl stop glusterd

>>>> 6.Stop the nfs-ganesha service on all the nodes of the cluster by executing the following command: 

# systemctl stop pcsd

Please change as below:

6. Stop the pcsd service on all the nodes:

# systemctl stop pcsd

>>>> In the note under point 7, please change the first statement as below:

This will install other dependent packages (if any).

>>>> 9. a. Remove the old ganesha.conf file and rename the newly created ganesha.conf.rpm save to ganesha.conf under /etc/ganesha. 

Please change ganesha.conf.rpm save as ganesha.conf.rpmsave

>>>> 10. Enable the firewall settings for the new services and ports. Information on how to enable the services is available in Red Hat Gluster Storage 3.1 Administration Guide. 

The current link points to an old document. Please change it to the latest 3.1.3 Administration Guide.

Comment 8 Shashank Raj 2016-06-18 07:13:08 UTC
Thanks, Laura. 

The document content looks perfect now.

The only clarification I need is:

In point 10, the reference doc link says "Red Hat Gluster Storage 3.1 Administration Guide." which should be 3.1.3 instead.

Is this something that will get changed once we push the 3.1.3 documents?

If that is the case, it's perfect; otherwise we need to change it.

Once you confirm please move it to ON_QA.

Comment 10 Shashank Raj 2016-06-18 13:02:52 UTC
Verified the doc content at the link provided below:

http://jenkinscat.gsslab.pnq.redhat.com:8080/job/doc-Red_Hat_Gluster_Storage-3.1-Installation_Guide%20%28html-single%29/lastStableBuild/artifact/tmp/en-US/html-single/index.html#NFS_Ganesha_Update

All the relevant steps related to the nfs-ganesha upgrade have been provided, and no further modifications are required.

Based on the above observation, marking this bug as Verified.

