Description of problem:
During the upgrade of a multi-node cluster from OpenShift 3.2 to 3.3, the PV folders on the NFS server were deleted. Before the upgrade, projects existed on the cluster with data stored. After the upgrade, the output of `oc get pv` listed the PVs just as before the upgrade, including the PVCs for the existing projects; e.g. `Bound <project-name>/<pod>` entries were still intact.

Error on pods after the upgrade:
Unable to mount volumes for pod ..... timeout expired waiting for volumes to attach/mount for pod

Also output from journalctl:
Apr 19 10:37:47 ip-<masked>.us-west-1.compute.internal atomic-openshift-node[8853]: I0419 10:37:47.654818 8853 reconciler.go:294] MountVolume operation started for volume "kubernetes.io/secret/3bd31a6a-24ef-11e7-b704-02f860c51e92-router-token-yf68v" (spec.Name: "router-token-yf68v") to pod "3bd31a6a-24ef-11e7-b704-02f860c51e92" (UID: "3bd31a6a-24ef-11e7-b704-02f860c51e92"). Volume is already mounted to pod, but remount was requested.

Version-Release number of selected component (if applicable):
After the upgrade:
oc v3.3.1.17
kubernetes v1.3.0+52492b4
features: Basic-Auth GSSAPI Kerberos SPNEGO
Server https://ip-<masked>.us-west-1.compute.internal:8443
openshift v3.3.1.17
kubernetes v1.3.0+52492b4

How reproducible:
Upgrade a multi-node cluster backed by an NFS server from OpenShift 3.2 to 3.3 using the guide here: https://docs.openshift.com/container-platform/3.3/install_config/upgrading/automated_upgrades.html

Steps to Workaround:
for i in {51..100}; do sudo mkdir pv${i}; sudo chown nfsnobody:nfsnobody pv${i}; sudo chmod 777 pv${i}; done
sudo mkdir registry; sudo chown nfsnobody:nfsnobody registry; sudo chmod 777 registry
sudo service nfs restart

These steps allowed the pods to come back up, but all data was still gone.
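The workaround above can be collected into one idempotent script. EXPORT_ROOT is an assumption, since the report does not give the actual NFS export path on the affected server; it defaults to ./exports here so the sketch can be tried safely without root.

```shell
#!/bin/sh
# Hedged sketch of the workaround: recreate the deleted PV export
# directories under the NFS export root. EXPORT_ROOT is hypothetical --
# substitute the real export path used by your NFS server.
EXPORT_ROOT="${EXPORT_ROOT:-./exports}"

for i in $(seq 51 100); do
  dir="${EXPORT_ROOT}/pv${i}"
  mkdir -p "$dir"                                       # -p makes re-runs harmless
  chown nfsnobody:nfsnobody "$dir" 2>/dev/null || true  # needs root on a real server
  chmod 777 "$dir"
done

mkdir -p "${EXPORT_ROOT}/registry"
chown nfsnobody:nfsnobody "${EXPORT_ROOT}/registry" 2>/dev/null || true
chmod 777 "${EXPORT_ROOT}/registry"
# On the real NFS server, follow with: sudo service nfs restart
```

Note that this only restores the empty directory structure so pods can mount their volumes again; any data that was in the PV folders before the upgrade is not recovered.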
Actual results:
PV folders on the NFS server were deleted after the upgrade.

Expected results:
The upgrade completes and the data remains in the PVs.

Additional info:
Used the automated upgrade via Ansible with the ~/.config/openshift/installer.cfg.yml file.
Moving this to storage, as the PVs should be cleaned up as part of the pod-draining operations. The installer shouldn't need to do anything to account for that.
What further info is required?
I can't see the question, but I spoke to bchilds, and the doc I followed is here: https://docs.openshift.com/container-platform/3.3/install_config/upgrading/automated_upgrades.html
lfitzger: I've opened a BZ against the Upgrade component to get the upgrade path fixed and documented: https://bugzilla.redhat.com/show_bug.cgi?id=1463393 Since this is functionally not a bug for you, is it OK to close this?
Can you tell the exact version you went from and to?
The version upgrade was 3.2 to 3.3
I didn't note the patch version of 3.2 at the time, but it's possible we were below 3.2.1.31.
lfitzger can you verify the 3.3 patch version?
oc v3.3.1.17
Closing this in favor of https://bugzilla.redhat.com/show_bug.cgi?id=1463393 *** This bug has been marked as a duplicate of bug 1463393 ***