Bug 1421125 - [Doc RFE] Document instructions regarding how to upgrade CNS 3.4 to OpenShift Container Platform 3.5.
Summary: [Doc RFE] Document instructions regarding how to upgrade CNS 3.4 to OpenShift...
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: doc-Container_Native_Storage_with_OpenShift
Version: cns-3.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: CNS 3.5
Assignee: Divya
QA Contact: krishnaram Karthick
URL:
Whiteboard:
Depends On:
Blocks: 1415610
 
Reported: 2017-02-10 12:13 UTC by Anjana Suparna Sriram
Modified: 2017-11-17 05:32 UTC
CC: 9 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of:
Environment:
Last Closed:
Embargoed:



Description Anjana Suparna Sriram 2017-02-10 12:13:39 UTC
Additional info:

Comment 6 Mohamed Ashiq 2017-04-13 11:13:03 UTC
1) 

sec 8.2, step 1, NOTE:

When you run yum update heketi-templates -y command, the heketi-templates package is replaced with cns-deploy. 

->

When you run the yum update heketi-templates -y command, the cns-deploy package is installed and the heketi-templates package is replaced with cns-deploy.
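A quick way to confirm the swap (a minimal sketch; rpm queries are standard, <version> is a placeholder):

# yum update heketi-templates -y
# rpm -q cns-deploy
cns-deploy-<version>
# rpm -q heketi-templates
package heketi-templates is not installed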


2)

8.3 sec, step 2 

remove
################
# oc delete templates glusterfs

For example:

# oc delete templates glusterfs
template "glusterfs" deleted

################

Add a new step, Delete the old gluster template:

# oc delete templates glusterfs

For example:

# oc delete templates glusterfs
template "glusterfs" deleted
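Optionally, a check before re-registering (a minimal sketch):

# oc get templates
(the glusterfs template should no longer be listed)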

3) 

8.3 sec, step 3

Also split step 3 as:
##################
Register the new gluster template:

# oc create -f /usr/share/heketi/templates/glusterfs-template.yaml

For example:

# oc create -f /usr/share/heketi/templates/glusterfs-template.yaml
template "glusterfs" created

##################

##################
Execute the following command to start the gluster DaemonSet:

# oc process glusterfs | oc create -f -

For example:

# oc process glusterfs | oc create -f -
daemonset "glusterfs" created
##################
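A possible verification right after these two steps (a sketch; plain oc get commands):

# oc get ds
(the glusterfs DaemonSet should be listed)
# oc get pods
(one glusterfs pod per storage node should be Running)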


Good job, everything else looks good.

Comment 8 krishnaram Karthick 2017-04-15 06:26:30 UTC
1) I don't see a dc for glusterfs on a setup configured using cns-deploy 3.4

# oc edit deploymentconfig glusterfs-dc-<IP-ADDR/Hostname> 

2) And heketi template already has type: Recreate

3)  My understanding is that from 3.4 we use a daemonset to configure the gluster pods, so during upgrade the old daemonset must be deleted and a new one created. Hence, a step to delete the gluster daemonset is necessary.

Execute the following command to deploy the Red Hat Gluster Storage pods:

# oc process glusterfs | oc create -f -
daemonset "glusterfs" created

4) From 3.4, we support a daemonset for configuring gluster. I don't think a 'dc' would be present.

# oc delete deploymentconfig  glusterfs-dc-<IP_address/Hostname>

5) It would be good to specify the order of upgrade: whether heketi is upgraded first or the gluster pods. The current documentation looks confusing.

6) Please revisit and check whether steps 2 through 6 are necessary under 'Upgrading your environment'

7) The output for 'oc get pods' is incorrect.

 Execute the following command to verify that the containers are running:

# oc get pods

For example:

# oc get pods
NAME                    READY     STATUS    RESTARTS   AGE
storage-project-router-1-pj9ea     1/1       Running   0          1d
deploy-heketi-1-m7x8g   1/1       Running   0          1m
glusterfs-41lfl         1/1       Running   0          1m
glusterfs-dtyr4         1/1       Running   0          1m
glusterfs-ral2d         1/1       Running   0          1m

8) Following are the steps for updating a DaemonSet

can be,

Following are the steps for updating a DaemonSet for glusterfs

9) An explanation of what we are planning to do and how the upgrade is done is necessary, rather than just having the steps alone.

10) Under 9.3. Upgrading your Gluster Pods, we have steps 7 & 8 as follows.

----------------------------------------------------------------------------
7)  Execute the following command to delete the old gluster pods: 

8) Repeat Step 6 to delete all the gluster pods. You must ensure that the new pod is running before deleting the next old gluster pod. In this release, we support OnDelete Strategy DaemonSet update strategy. With OnDelete Strategy update strategy, after you update a DaemonSet template, new DaemonSet pods will only be created when you manually delete old DaemonSet pods. 
----------------------------------------------------------------------------------------

A user might delete all the gluster pods after reading step 7. It is necessary to explain that a rolling upgrade of the gluster pods is required. Hence, please explain the rolling upgrade before step 7 or in step 7, and ask the user to delete the gluster pods one by one, ensuring each replacement pod has come up first.
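Deleting them one at a time would look roughly like this (a sketch; the pod name is the doc's own example):

# oc delete pod glusterfs-41lfl
pod "glusterfs-41lfl" deleted
# oc get pods
(wait until the replacement glusterfs pod shows 1/1 Running before deleting the next one)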

11) Apart from checking pod status, is there any other way to check whether the upgrade is successful? Pod status can be misleading, can't it?

Comment 9 Mohamed Ashiq 2017-04-17 07:19:06 UTC
(In reply to krishnaram Karthick from comment #8)
> 1) I don't see dc for glusterfs for setup configured using cns-deploy 3.4
> 
> # oc edit deploymentconfig glusterfs-dc-<IP-ADDR/Hostname> 
> 

Yeah, you are right. There is no dc for gluster anymore; see the next comment.


> 2) And heketi template already has type: Recreate
> 

Yeah, the pre-upgrade step is not required anymore, as this change is now made in the templates. So we can remove 9.1 completely.
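For reference, one way to confirm the stanza Karthick mentions is already in the shipped template (a sketch; the grep offset may vary with the YAML layout):

# oc get templates heketi -o yaml | grep -A1 strategy
      strategy:
        type: Recreate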

> 3)  My understanding is that from 3.4, we have daemonset to configure
> gluster pods. Hence, it is necessary to delete the daemonset and create a
> new one. Hence, a step to delete daemonset for gluster is necessary
> 

I had a thought that we had documented keeping the dc if the customer doesn't want to move to the ds. Maybe it was just a discussion and we didn't do it. Yeah, officially we are not using a dc for the gluster pods anymore.


Please delete steps 2, 3, 4, 5, and 6 from section 9.2, as we have separate steps for upgrading the gluster pods.

> Execute the following command to deploy the Red Hat Gluster Storage pods:
> 
> # oc process glusterfs | oc create -f -
> daemonset "glusterfs" created
> 

This is taken care of in 9.3.

> 4) from 3.4, we support daemonset for configuring gluster. I don't think
> 'dc' would be present
> 
> # oc delete deploymentconfig  glusterfs-dc-<IP_address/Hostname>
> 

Point 2 will address this too.

> 5) It would be good to specify the order of upgrade. Whether we are
> upgrading heketi first or gluster pods. The current documentation looks
> confusing.
> 

Yeah, addressing point 2 will satisfy this as well. Heketi should be updated first and then the gluster pods.

> 6) Please revisit and check if steps 2 through 6 is necessary under
> 'Upgrading your environment'
> 

Point 2 covers this. Cool.


> 7) The output for 'oc get pods' is incorrect.
> 
>  Execute the following command to verify that the containers are running:
> 
> # oc get pods
> 
> For example:
> 
> # oc get pods
> NAME                    READY     STATUS    RESTARTS   AGE
> storage-project-router-1-pj9ea     1/1       Running   0          1d
> deploy-heketi-1-m7x8g   1/1       Running   0          1m
> glusterfs-41lfl         1/1       Running   0          1m
> glusterfs-dtyr4         1/1       Running   0          1m
> glusterfs-ral2d         1/1       Running   0          1m
> 

You mean it should be the heketi pod (and also the alignment), right? Is there any other change?
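Presumably the corrected example would look something like this (an assumption: heketi pod name patterned after the deploy-heketi one, columns realigned):

# oc get pods
NAME                             READY     STATUS    RESTARTS   AGE
storage-project-router-1-pj9ea   1/1       Running   0          1d
heketi-1-m7x8g                   1/1       Running   0          1m
glusterfs-41lfl                  1/1       Running   0          1m
glusterfs-dtyr4                  1/1       Running   0          1m
glusterfs-ral2d                  1/1       Running   0          1m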

> 8) Following are the steps for updating a DaemonSet
> 
> can be,
> 
> Following are the steps for updating a DaemonSet for glusterfs
> 

Anything works for me.

We can change it to:

Following are the steps for updating a DaemonSet for glusterfs

> 9) An explanation of what we are planning to do and how the upgrade is done
> is necessary, rather than just having steps alone
> 

Like we have to mention: the --cascade=false option while deleting the old DaemonSet does not delete the gluster pods; it deletes just the ds. After the old DaemonSet is deleted, load the new one. Then, when we manually delete the old pods, the new pods that are created have the configuration of the new DaemonSet.

This is all that it does. 
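Put as commands, the sequence would roughly be (a sketch; <old_gluster_pod> is a placeholder, paths as used elsewhere in this doc):

# oc delete ds glusterfs --cascade=false
(deletes only the DaemonSet object; the gluster pods keep running)
# oc delete templates glusterfs
# oc create -f /usr/share/heketi/templates/glusterfs-template.yaml
# oc process glusterfs | oc create -f -
(loads the new DaemonSet; with OnDelete, pods are replaced only as you delete them)
# oc delete pod <old_gluster_pod>
(the replacement pod comes up with the new configuration; repeat pod by pod)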

> 10) Under ⁠9.3. Upgrading your Gluster Pods, We have step 7 & 8 as follows.
> 
> ----------------------------------------------------------------------------
> 7)  Execute the following command to delete the old gluster pods: 
> 
> 8) Repeat Step 6 to delete all the gluster pods. You must ensure that the
> new pod is running before deleting the next old gluster pod. In this
> release, we support OnDelete Strategy DaemonSet update strategy. With
> OnDelete Strategy update strategy, after you update a DaemonSet template,
> new DaemonSet pods will only be created when you manually delete old
> DaemonSet pods. 
> ----------------------------------------------------------------------------------------
> 
>  User might delete all gluster pods after reading step 7. It is necessary to
> give an explanation that a rolling upgrade of gluster pod is necessary.
> Hence, please explain about rolling reboot before step 7 or in step 7 and
> ask user to delete gluster pods one by one ensuring the pod has come up.
> 

Add a note saying "Gluster pods should follow a rolling upgrade: only after one gluster pod is updated successfully should the next gluster pod be upgraded. Delete the gluster pods one by one, ensuring the new pod is up and running before deleting the next one."

>  11) Apart from checking pod status, is there any other way to check if
> upgrade is successful? pod status can be misleading, isn't it?

Pod status is the status of glusterd in the pod. If you think there is a case where this information is not valid, we can add a note. Most of the time it works fine. A node going down is one such case, but after the timeouts the failure is reflected there too.
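One extra check we could document beyond pod status (a sketch; the pod name is the doc's example):

# oc rsh glusterfs-41lfl rpm -q glusterfs
(should report the upgraded glusterfs package inside the pod)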

Comment 11 krishnaram Karthick 2017-04-18 02:39:31 UTC
More comments:
1)

9.1 Upgrading your environment

should be,

9.1 Upgrading Heketi server

2) The heketi service account should be deleted before creating a new one:

oc create -f /usr/share/heketi/templates/heketi-service-account.yaml 

3) Do we need to delete heketi secret?

oc delete deploymentconfig,secret,route heketi
deploymentconfig "heketi" deleted
route "heketi" deleted
Error from server (NotFound): secrets "heketi" not found

4) The heketi service also needs to be deleted before creating the new heketi server:

oc process heketi | oc create -f -
route "heketi" created
deploymentconfig "heketi" created
Error from server (AlreadyExists): services "heketi" already exists

Please make the changes for heketi and provide the updated link, so I can complete the heketi upgrade testing and proceed with the gluster upgrade.

Comment 12 Mohamed Ashiq 2017-04-18 06:17:00 UTC
(In reply to krishnaram Karthick from comment #11)
> More comments:
> 1)
> 
> 9.1 Upgrading your environment
> 
> should be,
> 
> 9.1 Upgrading Heketi server
> 
Agreed.

> 2) heketi service account should be deleted before creating a new one,
> 
> oc create -f /usr/share/heketi/templates/heketi-service-account.yaml 
> 

We can remove step 4. The heketi-service-account will already be present.
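A quick existence check (a sketch; the service-account name is inferred from the yaml filename, so treat it as an assumption):

# oc get serviceaccounts heketi-service-account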


> 3) Do we need to delete heketi secret?
> 
> oc delete deploymentconfig,secret,route heketi
> deploymentconfig "heketi" deleted
> route "heketi" deleted
> Error from server (NotFound): secrets "heketi" not found
> 

Please change the command in step 6 to:

# oc delete deploymentconfig,service,route heketi
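The expected output then becomes (a sketch, adapting Karthick's log above; actual messages may differ):

deploymentconfig "heketi" deleted
service "heketi" deleted
route "heketi" deleted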

> 4) heketi service also needs to be deleted before creating new heketi server
> 
> oc process heketi | oc create -f -
> route "heketi" created
> deploymentconfig "heketi" created
> Error from server (AlreadyExists): services "heketi" already exists
> 

The change to the last step will fix this too.

> Please make the changes for heketi and provide the updated link so I can
> complete the upgrade of heketi testing and proceed with gluster upgrade.

Comment 14 krishnaram Karthick 2017-04-18 10:54:08 UTC
Looks good to me, moving the bug to verified.

