NFS-Ganesha is undergoing enhancements to make it more stable and to recommend it as the default NFS protocol. This RFE is raised to track the changes required in the upgrade section due to the following: 1) the shared volume changes coming in 3.2, which will impact the upgrade steps; 2) with NFS-Ganesha becoming the default protocol, recommendations for existing/new users on how they can choose between, and upgrade across, gNFS and NFS-Ganesha going forward.
Below are the testing scenarios applicable for gNFS to NFS-Ganesha:
a) migrating an NFS client from gNFS to NFS-Ganesha version 3
b) migrating an NFS client from gNFS to NFS-Ganesha version 4
c) mapping the volume options used for gNFS to NFS-Ganesha configuration options
I think we need to look at the following as well:
a) gNFS with CTDB to NFS-Ganesha
b) NFS-Ganesha to NFS-Ganesha (with respect to the shared volume changes)
Based on the discussions, the following are the scenarios we should be documenting:

1) Existing customers (3.1.3 or lower) on gNFS with CTDB upgrading to NFS-Ganesha
   > All the CTDB-related configuration and services need to be reverted/stopped.
   > Any running gluster-nfs/kernel-nfs services need to be stopped.
   > Perform the 3.2 upgrade using the normal Gluster upgrade steps.
   > Follow the configuration steps for setting up Ganesha in the cluster.
2) Existing customers with gNFS who wish to continue using gNFS after upgrading to 3.2
   > Just enable NFS on the volumes after the upgrade to 3.2.
3) Existing customers on NFS-Ganesha upgrading to NFS-Ganesha with 3.2
   > Follow the existing Ganesha upgrade steps (along with the changes related to the shared volume).
4) New customers who intend to use gNFS on 3.2
   > Just enable NFS on the volumes after the 3.2 installation.
5) New customers who intend to use NFS-Ganesha
   > Follow the existing Ganesha install/configuration steps (along with the changes related to the shared volume).

The scenarios/steps above are just an overview of what needs to be in the doc. Once we start the actual testing, detailed steps will be provided.
WRT (2), if a volume has gNFS enabled (i.e., option nfs.disable is off) and is upgraded, we suspect that (by design) the gluster-NFS option (nfs.disable) remains off, which would mean gluster-NFS may get started (depending on the other services it expects to be started). We need to test this out and document it as well.

WRT the gNFS upgrade scenarios, I can think of the following combinations:
a) CTDB/gNFS -> NFS-Ganesha (covered in (1))
b) CTDB/gNFS -> CTDB/gNFS (same as above, but the CTDB setup needs to be redone)
c) gNFS (without CTDB) -> NFS-Ganesha (similar to (1); though the NFS-Ganesha setup disables gNFS, it is better to stop the gNFS service before the upgrade so that it is clear this is a disruptive upgrade)
d) gNFS (without HA/CTDB) -> gNFS (without HA/CTDB) (covered in (2); as mentioned above, we need not re-enable the NFS service, since it should be started automatically, i.e., as part of the service upgrade)

Also, for new customers who intend to use gNFS on 3.2 (point 4 above), we have to add a note that gNFS is supported only in maintenance mode and is likely to be deprecated in future releases.
WRT the changes in the Ganesha 3.1.3 to Ganesha 3.2 upgrade path, the following needs to be taken care of in the nfs-ganesha section "8.2. Updating NFS-Ganesha in the Offline Mode":

> Remove line 1 under point 9 and renumber 2 & 3 accordingly.
> Add the following steps after step 11:

12. Once the shared volume is created, create a folder named "nfs-ganesha" inside /var/run/gluster/shared_storage:

    [root@dhcp42-222 ganesha]# cd /var/run/gluster/shared_storage/
    [root@dhcp42-222 shared_storage]# ls
    [root@dhcp42-222 shared_storage]# mkdir nfs-ganesha
    [root@dhcp42-222 shared_storage]# ls
    nfs-ganesha

13. Copy ganesha.conf, ganesha-ha.conf, and the exports folder from /etc/ganesha to /var/run/gluster/shared_storage/nfs-ganesha:

    [root@dhcp42-222 ~]# cd /etc/ganesha/
    [root@dhcp42-222 ganesha]# ls
    exports  ganesha.conf  ganesha-ha.conf  ganesha-ha.conf.sample
    [root@dhcp42-222 ganesha]# cp ganesha.conf ganesha-ha.conf /var/run/gluster/shared_storage/nfs-ganesha/
    [root@dhcp42-222 ganesha]# cp -r exports/ /var/run/gluster/shared_storage/nfs-ganesha/

> Renumber point 12 as 14.

NOTE: Since these steps are kept together with the initial upgrade test, I will confirm them again once we are into the install/upgrade cycle.
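The transcript in steps 12-13 can be condensed into a small script. As a sketch it stages everything under a temporary root so it can be exercised without a live cluster; the temp-root indirection and the sample file contents are illustrative only, while the relative paths and copy commands follow the documented procedure.

```shell
# Stage a fake environment under a temp root (illustrative only; on a real
# node the paths would be /etc/ganesha and /var/run/gluster/shared_storage).
root="$(mktemp -d)"
etc_ganesha="$root/etc/ganesha"
shared="$root/var/run/gluster/shared_storage"

# Fake pre-upgrade /etc/ganesha contents (sample data, not real configs)
mkdir -p "$etc_ganesha/exports"
echo '%include "/etc/ganesha/exports/export.testvol.conf"' > "$etc_ganesha/ganesha.conf"
echo 'HA_NAME="ganesha-ha"' > "$etc_ganesha/ganesha-ha.conf"
echo 'EXPORT { Path = "/testvol"; }' > "$etc_ganesha/exports/export.testvol.conf"

# Step 12: create the nfs-ganesha folder inside the shared storage mount
mkdir -p "$shared/nfs-ganesha"

# Step 13: copy the config files and the exports folder into it
cp "$etc_ganesha/ganesha.conf" "$etc_ganesha/ganesha-ha.conf" "$shared/nfs-ganesha/"
cp -r "$etc_ganesha/exports" "$shared/nfs-ganesha/"

ls "$shared/nfs-ganesha"
```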
Hi Shashank,

I have made the changes based on comment 8:
http://ccs-jenkins.gsslab.brq.redhat.com:8080/job/doc-Red_Hat_Gluster_Storage-3.2-Installation_Guide-branch-master/lastSuccessfulBuild/artifact/tmp/en-US/html-single/index.html#NFS_Ganesha_Update

Let me know if there are any more changes to be made WRT any further testing that was done.

Thanks
Bhavana, the changes WRT the Ganesha-to-Ganesha upgrade look good. We will provide the other upgrade scenarios and their respective steps once we start testing them.
To verify the content we need to know the steps required for the upgrade. Currently QE is busy with ON_QA verification and other testing, so I would request Soumya, Jiffin, or anyone else from the NFS-Ganesha team to provide a few steps to start the documentation with; I can then verify them and add anything that is missing.
(In reply to Bhavana from comment #9)
> I have made the changes based on comment 8:
> http://ccs-jenkins.gsslab.brq.redhat.com:8080/job/doc-Red_Hat_Gluster_Storage-3.2-Installation_Guide-branch-master/lastSuccessfulBuild/artifact/tmp/en-US/html-single/index.html#NFS_Ganesha_Update
> Let me know if there are any more changes to be made wrt any further testing that was done.

A minor modification is required in the doc you have provided, in section 8.2 Updating NFS-Ganesha in the Offline Mode, at point 13 ("Copy the ganesha.conf, ganesha-ha.conf, and the exports folder from /etc/ganesha to /var/run/gluster/shared_storage/nfs-ganesha"). You have also mentioned that the admin should change the export entries in ganesha.conf; that can be done with the following:

# sed -i 's/\/etc\/ganesha/\/var\/run\/gluster\/shared_storage\/nfs-ganesha/' /var/run/gluster/shared_storage/nfs-ganesha/ganesha.conf

The above command will update all the export entries.
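The sed rewrite above can be sketched against a staged copy of ganesha.conf rather than the live shared-storage path; the %include lines and volume names below are made-up sample data, the sed expression is the one from the comment.

```shell
# Stage a sample ganesha.conf with /etc/ganesha-style export includes
conf="$(mktemp)"
cat > "$conf" <<'EOF'
%include "/etc/ganesha/exports/export.testvol.conf"
%include "/etc/ganesha/exports/export.othervol.conf"
EOF

# Rewrite every /etc/ganesha reference to the shared-storage location
# (one occurrence per line, so no /g flag is needed here)
sed -i 's/\/etc\/ganesha/\/var\/run\/gluster\/shared_storage\/nfs-ganesha/' "$conf"

cat "$conf"
```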
Here are the minutes from yesterday's meeting. We have identified the sections that have to be edited and the new sections that have to be added to address the issue listed in the bug (the headings can be edited later):

8.2 --- Updating the NFS server from 3.1 to 3.2
8.2.1: Updating Gluster NFS
  with CTDB (link to 9.2.4)
    -> changes to 9.2.4 wrt gluster nfs too.
    -> Add another step between 14 and 15 in 9.2.4 to enable gluster nfs.
  without CTDB (link to 8.1)
8.2.2: Updating NFS-Ganesha in offline mode (same as 8.2)
8.2.3: Migrating from Gluster NFS to NFS Ganesha in offline mode (steps to be added)

Section 8.1 - Add another step between 8 and 9 to enable gluster nfs.
Section 8.1 - Add an "Important" note: From 3.2, Gluster NFS will be disabled by default.
>> Step to enable gluster-nfs

Enable gluster-NFS using the command below:

# gluster volume set <volname> nfs.disable off

Eg:
$ gluster volume set testvol nfs.disable off
Gluster NFS is being deprecated in favor of NFS-Ganesha
Enter "yes" to continue using Gluster NFS (y/n) y
volume set: success

>> 8.2.3: Migrating from Gluster NFS to NFS Ganesha in offline mode

The following steps have to be performed on each node of the replica pair.

1) To ensure that CTDB does not start automatically after a reboot, run the following command on each node of the CTDB cluster:
   # chkconfig ctdb off
2) Stop the CTDB service on the Red Hat Gluster Storage node using the following command on each node of the CTDB cluster:
   # service ctdb stop
   To verify that the CTDB and NFS services are stopped, execute the following command:
   # ps axf | grep -E '(ctdb|nfs)[d]'
3) Stop the gluster services on the storage server using the following commands:
   # service glusterd stop
   # pkill glusterfs
   # pkill glusterfsd
4) Delete the CTDB volume. [Request Surabhi to provide an example here]
5) Update the server using the following command:
   # yum update
6) Reboot the server.
7) Start the glusterd service using the following command:
   # service glusterd start
   On Red Hat Enterprise Linux 7:
   # systemctl start glusterd
8) When all nodes have been upgraded, run the following command to update the op-version of the cluster. This helps to prevent any compatibility issues within the cluster.
   # gluster volume set all cluster.op-version 30712
   Note: This op-version will change for 3.2. Please check with the glusterd team on the new value.
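As an aside on the verification command in step 2: the bracketed `[d]` in the grep pattern is a common shell idiom that stops a `ps | grep` pipeline from matching the grep process itself (the grep command line contains the literal text `[d]`, not `ctdbd` or `nfsd`). A minimal demonstration on canned, made-up process-listing lines:

```shell
# Fake 'ps axf'-style output (sample data); only the ctdbd and nfsd
# lines should match the pattern.
ps_output='  101 ?  Ss  0:00 ctdbd
  202 ?  Ss  0:00 nfsd
  303 ?  Ss  0:00 bash'

# The character class [d] matches a literal "d" in the input, but the
# pattern string itself never contains "ctdbd" or "nfsd", so in a real
# "ps axf | grep" pipeline the grep process would not match itself.
echo "$ps_output" | grep -E '(ctdb|nfs)[d]'
```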
9) To install the nfs-ganesha packages, refer to the link below:
   https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Installation_Guide/chap-Deploying_NFSGanesha.html
10) To configure the nfs-ganesha cluster, refer to the link below:
   https://documentation-devel.engineering.redhat.com/site/documentation/en-US/Red_Hat_Gluster_Storage/3.1/html/Administration_Guide/sect-NFS.html#sect-NFS_Ganesha

Note: The above links are subject to change based on the 3.2 installation/administration guide content.

Surabhi/Jiffin, please correct/update if I have missed anything.
(In reply to Bhavana from comment #9)
> I have made the changes based on comment 8:
> http://ccs-jenkins.gsslab.brq.redhat.com:8080/job/doc-Red_Hat_Gluster_Storage-3.2-Installation_Guide-branch-master/lastSuccessfulBuild/artifact/tmp/en-US/html-single/index.html#NFS_Ganesha_Update
> Let me know if there are any more changes to be made wrt any further testing that was done.

With bug 1400816, we need to add the note below after step 7:

Note: Make sure the nfs-ganesha packages are installed on all the nodes of the Gluster trusted storage pool, even if a few of them are not part of the NFS-Ganesha cluster.
Hi Soumya,

The changes discussed in the meeting and the comments provided in this bug are incorporated in the installation guide:

http://ccs-jenkins.gsslab.brq.redhat.com:8080/job/doc-Red_Hat_Gluster_Storage-3.2-Installation_Guide-branch-bz-1368444-nfs-ganesha-upgrade/lastSuccessfulBuild/artifact/tmp/en-US/html-single/index.html#idm139694627873008

Please review it and let me know if it requires any further updates before I move this to ON_QA.
Just delete the sentence "From Red Hat Gluster Storage 3.2 onwards, Gluster NFS will be disabled by default."
Thanks Kaleb, I have gone ahead and made the changes. Since the bug still has a needinfo on Alok, I am not moving it to ON_QA yet. Here is the updated link:

http://ccs-jenkins.gsslab.brq.redhat.com:8080/job/doc-Red_Hat_Gluster_Storage-3.2-Installation_Guide-branch-bz-1368444-nfs-ganesha-upgrade/lastSuccessfulBuild/artifact/tmp/en-US/html-single/index.html#idm140049637788496
I have incorporated all the comments/suggestions provided in this bug and also the ones discussed in meetings (added in comment 17). Following is the updated link:

http://ccs-jenkins.gsslab.brq.redhat.com:8080/job/doc-Red_Hat_Gluster_Storage-3.2-Installation_Guide-branch-master/lastSuccessfulBuild/artifact/tmp/en-US/html-single/index.html#idm140628624065712
Bhavana,

Changes are needed in a few of the updates here. As the decision has now been taken that existing gNFS volumes will not be affected, we need to remove the following additions from the upgrade section:

1. http://ccs-jenkins.gsslab.brq.redhat.com:8080/job/doc-Red_Hat_Gluster_Storage-3.2-Installation_Guide-branch-master/lastSuccessfulBuild/artifact/tmp/en-US/html-single/index.html#Updating_Red_Hat_Storage_in_the_Offline_Mode

   Enable gluster-NFS using the following command:
   # gluster volume set volname nfs.disable off
   For example:
   # gluster volume set testvol nfs.disable off
   Gluster NFS is being deprecated in favor of NFS-Ganesha
   Enter "yes" to continue using Gluster NFS (y/n) y
   volume set: success

2. http://ccs-jenkins.gsslab.brq.redhat.com:8080/job/doc-Red_Hat_Gluster_Storage-3.2-Installation_Guide-branch-master/lastSuccessfulBuild/artifact/tmp/en-US/html-single/index.html#In-Service_Software_Upgrade_for_a_CTDB_Setup, section 9.2.4.1, bullet 15

   Enable gluster-NFS using the command below:
   # gluster volume set <volname> nfs.disable off
   For example:
   # gluster volume set testvol nfs.disable off
   Gluster NFS is being deprecated in favor of NFS-Ganesha
   Enter "yes" to continue using Gluster NFS (y/n) y
   volume set: success

3. http://ccs-jenkins.gsslab.brq.redhat.com:8080/job/doc-Red_Hat_Gluster_Storage-3.2-Installation_Guide-branch-master/lastSuccessfulBuild/artifact/tmp/en-US/html-single/index.html#idm140628624065712, 8.2.3. Migrating from Gluster NFS to NFS Ganesha in Offline mode

   I think we should mention something like: "If there is an existing NFS-CTDB configuration/setup in place and you would like to migrate to NFS-Ganesha, you need to clean up the existing CTDB cluster and then configure a new Ganesha cluster."

   Now, in the cleanup steps, after step 4 ("Stop the gluster services on the storage server using the following command"):

   5. Stop the CTDB volume using:
      # gluster vol stop ctdb
   6. Delete the CTDB volume.
   7. Remove the following CTDB-related config files:
      /etc/ctdb/nodes
      /etc/ctdb/public_addresses
      /etc/sysconfig/ctdb
   8. yum update

   For step 9, where we are changing the op-version: the op-version needs to be updated. I am sure there should already be a BZ for this; will confirm.

Please get the changes reviewed by Soumya and send the updated link.
In section 8.2.2. Updating NFS-Ganesha in the Offline Mode:

"Execute the following steps to update the NFS-Ganesha service from Red Hat Gluster Storage 3.1.x to Red Hat Gluster Storage 3.1.3:"

The version numbers in the above line need to be changed.

In step 7 ("Update the NFS-Ganesha packages on all the nodes by executing the following command:"), remove the yum update of specific packages and just give:
# yum update
Also, just before step 15 ("Enable nfs-ganesha on the cluster: # gluster nfs-ganesha enable"), we need to add a note:

Note: Ensure that corosync.conf is not present in /etc/corosync, as it prevents the Ganesha cluster from coming up successfully due to a BZ (TBD) which is fixed in the 3.2 release.
Before running ganesha enable, we need to execute the following:

# /usr/libexec/ganesha/ganesha-ha.sh --cleanup /var/run/gluster/shared_storage/nfs-ganesha

Ignore #c28.
Hi Soumya,

I have incorporated all the changes mentioned by Surabhi:

http://ccs-jenkins.gsslab.brq.redhat.com:8080/job/doc-Red_Hat_Gluster_Storage-3.2-Installation_Guide-branch-bz-1368444-nfs-ganesha-upgrade/lastSuccessfulBuild/artifact/tmp/en-US/html-single/index.html#idm140496278412176

Let me know if these changes look OK as suggested by Surabhi in comment 26.

a) Surabhi and I felt that we can omit the suggestion regarding 8.2.3 about adding "If there is an existing NFS-CTDB configuration/setup in place and you would like to migrate to NFS-Ganesha we need to clean up the existing CTDB cluster and then configure a new Ganesha cluster."
b) The op-version will be added once the number is finalized. Laura is working on "upgrading to 3.2"; I presume she has created a bug to update the op-version in the guide. If she has, I'll check with her and add this detail there too.
c) Comment 28 is ignored, based on Surabhi's note in comment 29.
8.2.2. Updating NFS-Ganesha in the Offline Mode

As per comment 27, the following needs to be updated:

"Execute the following steps to update the NFS-Ganesha service from Red Hat Gluster Storage 3.1.x to Red Hat Gluster Storage 3.1.3:"

Also, after point 11, we need to add another point or a note:

Ensure that the shared storage is mounted on all the nodes of the cluster.

Then continue with the next point.
Hi Surabhi, Here is the updated link: http://ccs-jenkins.gsslab.brq.redhat.com:8080/job/doc-Red_Hat_Gluster_Storage-3.2-Installation_Guide-branch-master/lastSuccessfulBuild/artifact/tmp/en-US/html-single/index.html#idm140077629289216
Bhavana, as discussed in point 7, we need to give only "yum update" and not the specific package names.
Hi Surabhi, The changes are made as specified. http://ccs-jenkins.gsslab.brq.redhat.com:8080/job/doc-Red_Hat_Gluster_Storage-3.2-Installation_Guide-branch-master/lastSuccessfulBuild/artifact/tmp/en-US/html-single/index.html#NFS_Ganesha_Update
Soumya,

In section 8.2.2. Updating NFS-Ganesha in the Offline Mode, after step 8 (i.e., starting the glusterd services), we should add a step to update the op-version as well:

When all nodes have been upgraded, run the following command to update the op-version of the cluster. This helps to prevent any compatibility issues within the cluster.
# gluster volume set all cluster.op-version 31001
I had a discussion with Soumya. The op version step is added in section 8.2.2 after step 8: http://ccs-jenkins.gsslab.brq.redhat.com:8080/job/doc-Red_Hat_Gluster_Storage-3.2-Installation_Guide-branch-master/lastSuccessfulBuild/artifact/tmp/en-US/html-single/index.html#NFS_Ganesha_Update
The upgrade section looks good. Moving the BZ to verified. Thanks Bhavana!
RHGS 3.2.0 GA completed on 23 March 2017