Description of problem:
Create a sc with a reclaim policy of Retain; after dynamic provisioning, the PV's reclaim policy is not Retain.

Version-Release number of selected component (if applicable):
openshift v3.9.0-0.42.0
kubernetes v1.9.1+a0ce1bc657

How reproducible:
Always

Steps to Reproduce:
1. Set up the nfs-provisioner pod on OCP (create the service account, update the SCC, cluster role, etc.)
2. Create a sc with the Retain reclaim policy
3. Create a pvc that consumes this sc
4. Check the pv created by dynamic provisioning

Actual results:
The PV's reclaim policy is still "Delete"

Expected results:
The PV's reclaim policy should be "Retain"

PV Dump:
# oc get pv pvc-83de7e45-17a5-11e8-9c6c-000d3a1aa471 -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    EXPORT_block: "\nEXPORT\n{\n\tExport_Id = 1;\n\tPath = /export/pvc-83de7e45-17a5-11e8-9c6c-000d3a1aa471;\n\tPseudo = /export/pvc-83de7e45-17a5-11e8-9c6c-000d3a1aa471;\n\tAccess_Type = RW;\n\tSquash = no_root_squash;\n\tSecType = sys;\n\tFilesystem_id = 1.1;\n\tFSAL {\n\t\tName = VFS;\n\t}\n}\n"
    Export_Id: "1"
    Project_Id: "0"
    Project_block: ""
    Provisioner_Id: 79969a2b-17a5-11e8-9202-0a580a800024
    kubernetes.io/createdby: nfs-dynamic-provisioner
    pv.kubernetes.io/provisioned-by: example.com/nfs
  creationTimestamp: 2018-02-22T07:53:48Z
  name: pvc-83de7e45-17a5-11e8-9c6c-000d3a1aa471
  resourceVersion: "93392"
  selfLink: /api/v1/persistentvolumes/pvc-83de7e45-17a5-11e8-9c6c-000d3a1aa471
  uid: 83fc5a81-17a5-11e8-9c6c-000d3a1aa471
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: pvc-j5ljo
    namespace: j5ljo
    resourceVersion: "93385"
    uid: 83de7e45-17a5-11e8-9c6c-000d3a1aa471
  nfs:
    path: /export/pvc-83de7e45-17a5-11e8-9c6c-000d3a1aa471
    server: 10.128.0.36
  persistentVolumeReclaimPolicy: Delete
  storageClassName: sc-j5ljo
status:
  phase: Bound

PVC Dump:
# oc get pvc -n j5ljo -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    annotations:
      control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"7996a07b-17a5-11e8-9202-0a580a800024","leaseDurationSeconds":15,"acquireTime":"2018-02-22T07:53:48Z","renewTime":"2018-02-22T07:53:50Z","leaderTransitions":0}'
      pv.kubernetes.io/bind-completed: "yes"
      pv.kubernetes.io/bound-by-controller: "yes"
      volume.beta.kubernetes.io/storage-provisioner: example.com/nfs
    creationTimestamp: 2018-02-22T07:53:48Z
    finalizers:
    - kubernetes.io/pvc-protection
    name: pvc-j5ljo
    namespace: j5ljo
    resourceVersion: "93400"
    selfLink: /api/v1/namespaces/j5ljo/persistentvolumeclaims/pvc-j5ljo
    uid: 83de7e45-17a5-11e8-9c6c-000d3a1aa471
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
    storageClassName: sc-j5ljo
    volumeName: pvc-83de7e45-17a5-11e8-9c6c-000d3a1aa471
  status:
    accessModes:
    - ReadWriteOnce
    capacity:
      storage: 1Gi
    phase: Bound
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

StorageClass Dump (if StorageClass used by PV/PVC):
# oc get sc sc-j5ljo -o yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  creationTimestamp: 2018-02-22T07:53:45Z
  name: sc-j5ljo
  resourceVersion: "93380"
  selfLink: /apis/storage.k8s.io/v1/storageclasses/sc-j5ljo
  uid: 8245d944-17a5-11e8-9c6c-000d3a1aa471
provisioner: example.com/nfs
reclaimPolicy: Retain

Additional info:
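A quick way to see the mismatch side by side (illustrative jsonpath queries; the object names come from the dumps above, and the outputs match those dumps):

# oc get sc sc-j5ljo -o jsonpath='{.reclaimPolicy}'
Retain
# oc get pv pvc-83de7e45-17a5-11e8-9c6c-000d3a1aa471 -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'
Delete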
EFS has the same issue.
I will try to backport the upstream patch that fixed this:

https://github.com/kubernetes-incubator/external-storage/pull/419/commits/48b796d4a1d587adf1abe49ebff4119df24c7aba

However, it is possible the patch requires updating the dependencies, which amounts to rebasing the package. That is probably something we should avoid at this stage.
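For context, the intended effect of that patch (a sketch of the outcome, not the patch itself): the provisioning controller reads reclaimPolicy from the StorageClass and stamps it onto the PV it creates, instead of leaving the hardcoded Delete default. With the fix in place, the PV dump in the description would instead show:

spec:
  persistentVolumeReclaimPolicy: Retain   # inherited from sc-j5ljo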
(In reply to Tomas Smetana from comment #2)
> I will try to backport the upstream patch that fixed this:
>
> https://github.com/kubernetes-incubator/external-storage/pull/419/commits/48b796d4a1d587adf1abe49ebff4119df24c7aba
>
> However, it is possible the patch requires updating the dependencies,
> which amounts to rebasing the package. That is probably something we
> should avoid at this stage.

Tomas, ideally a backport is not required. If you can retrigger or build new EFS and NFS containers from the external-storage repo, this should be supported by default.
(In reply to Humble Chirammal from comment #3)
> (In reply to Tomas Smetana from comment #2)
> > I will try to backport the upstream patch that fixed this:
> >
> > https://github.com/kubernetes-incubator/external-storage/pull/419/commits/48b796d4a1d587adf1abe49ebff4119df24c7aba
> >
> > However, it is possible the patch requires updating the dependencies,
> > which amounts to rebasing the package. That is probably something we
> > should avoid at this stage.
>
> Tomas, ideally a backport is not required. If you can retrigger or build
> new EFS and NFS containers from the external-storage repo, this should be
> supported by default.

We can ask Brad or Jan for help triggering the upstream containers; those should include this support. Otherwise, if you can get the latest tarball from upstream (https://github.com/kubernetes-incubator/external-storage/releases) and we can build downstream containers from it, that will also solve this problem.
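For example, one way to fetch an upstream tarball for a downstream build (illustrative; <release-tag> is a placeholder to be picked from the releases page linked above):

# curl -LO https://github.com/kubernetes-incubator/external-storage/archive/<release-tag>.tar.gz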
That's the problem: we need to ship a containerized version of the code we tested, and that is definitely not the most recent upstream. We either backport or live with the fact that there are known issues, and I'm afraid it's going to be the latter here. Changing the target release to 3.10.0 and adding the "Rebase" keyword.
Summary: the following external provisioners we support have this problem:
- NFS
- EFS
- CephFS
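For reference, the minimal StorageClass shape used to reproduce this on any of them (the name is illustrative; the provisioner value is the NFS one from the dump above, so substitute the EFS/CephFS provisioner name as deployed):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-retain
provisioner: example.com/nfs
reclaimPolicy: Retain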
Tested on the version below:

# oc version
openshift v3.10.0-0.46.0
kubernetes v1.10.0+b81c8f8

This issue still reproduces with the NFS and EFS provisioners.
The NFS provisioner works in my testing today; I used image v1.0.9. I will continue testing EFS tomorrow. Thanks.
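A hedged example of pinning the provisioner to that tag (the deployment and container names, and the image path, are assumptions; only the v1.0.9 tag comes from this comment):

# oc set image deployment/nfs-provisioner nfs-provisioner=quay.io/kubernetes_incubator/nfs-provisioner:v1.0.9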
We have tested on the version below:

openshift v3.10.0-0.66.0
kubernetes v1.10.0+b81c8f8

# uname -a
Linux wehe-master-etcd-nfs-1 3.10.0-862.el7.x86_64 #1 SMP Wed Mar 21 18:14:51 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux

# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.5 (Maipo)

The "Retain" reclaim policy works with the NFS/EFS/CephFS provisioners. We have a separate email thread to track the image tag issue. Thanks.
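The check from the original report now returns the class's policy (illustrative; <pv-name> is a placeholder for the dynamically provisioned PV):

# oc get pv <pv-name> -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'
Retain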
Thank you. I'm not totally sure about the correct versioning myself; I will try to find somebody who can shed some light on this.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:1816