Bug 1547884 - Storage class "Retain" reclaim policy does not work on NFS/EFS/CephFS provisioners
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 3.9.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 3.10.0
Assignee: Tomas Smetana
QA Contact: Wenqi He
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-02-22 08:26 UTC by Wenqi He
Modified: 2020-01-31 18:58 UTC
CC: 10 users

Fixed In Version: openshift-external-storage-0.0.2-1.gitd3c94f0.el7
Doc Type: Known Issue
Doc Text:
Cause: The external NFS and EFS provisioners do not respect the ReclaimPolicy of the volume's StorageClass.
Consequence: Dynamically provisioned NFS and EFS volumes can only have the default "Delete" ReclaimPolicy.
Workaround (if any): N/A
Result: The NFS and EFS external provisioners provision new volumes with the default ("Delete") policy, ignoring the ReclaimPolicy of the volume's StorageClass.
Clone Of:
Environment:
Last Closed: 2018-07-30 19:09:51 UTC
Target Upstream Version:
Embargoed:




Links
System: Red Hat Product Errata
ID: RHBA-2018:1816
Last Updated: 2018-07-30 19:10:18 UTC

Description Wenqi He 2018-02-22 08:26:53 UTC
Description of problem:
Create a StorageClass with a reclaim policy of Retain; after dynamic provisioning, the resulting PV's reclaim policy is not Retain.

Version-Release number of selected component (if applicable):
openshift v3.9.0-0.42.0
kubernetes v1.9.1+a0ce1bc657

How reproducible:
Always

Steps to Reproduce:
1. Set up the nfs-provisioner pod on OCP: create the service account, update the SCC, cluster role, etc.
2. Create a StorageClass with the Retain reclaim policy (an illustrative example follows these steps).
3. Create a PVC that consumes this StorageClass.
4. Check the PV created by dynamic provisioning.
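
For reference, a minimal StorageClass/PVC pair of the kind these steps describe could look like the following. The names sc-retain and pvc-retain are illustrative; the objects actually used in this run are in the dumps below.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-retain
provisioner: example.com/nfs
reclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-retain
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: sc-retain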

Actual results:
The PV's reclaim policy is still "Delete".

Expected results:
The PV's reclaim policy should be "Retain"
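
A quick way to check just the provisioned PV's reclaim policy (using the PV name from the dump below) is a jsonpath query like this, which returns "Delete" here instead of the expected "Retain":

# oc get pv pvc-83de7e45-17a5-11e8-9c6c-000d3a1aa471 -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'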

PV Dump:

# oc get pv pvc-83de7e45-17a5-11e8-9c6c-000d3a1aa471 -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    EXPORT_block: "\nEXPORT\n{\n\tExport_Id = 1;\n\tPath = /export/pvc-83de7e45-17a5-11e8-9c6c-000d3a1aa471;\n\tPseudo
      = /export/pvc-83de7e45-17a5-11e8-9c6c-000d3a1aa471;\n\tAccess_Type = RW;\n\tSquash
      = no_root_squash;\n\tSecType = sys;\n\tFilesystem_id = 1.1;\n\tFSAL {\n\t\tName
      = VFS;\n\t}\n}\n"
    Export_Id: "1"
    Project_Id: "0"
    Project_block: ""
    Provisioner_Id: 79969a2b-17a5-11e8-9202-0a580a800024
    kubernetes.io/createdby: nfs-dynamic-provisioner
    pv.kubernetes.io/provisioned-by: example.com/nfs
  creationTimestamp: 2018-02-22T07:53:48Z
  name: pvc-83de7e45-17a5-11e8-9c6c-000d3a1aa471
  resourceVersion: "93392"
  selfLink: /api/v1/persistentvolumes/pvc-83de7e45-17a5-11e8-9c6c-000d3a1aa471
  uid: 83fc5a81-17a5-11e8-9c6c-000d3a1aa471
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: pvc-j5ljo
    namespace: j5ljo
    resourceVersion: "93385"
    uid: 83de7e45-17a5-11e8-9c6c-000d3a1aa471
  nfs:
    path: /export/pvc-83de7e45-17a5-11e8-9c6c-000d3a1aa471
    server: 10.128.0.36
  persistentVolumeReclaimPolicy: Delete
  storageClassName: sc-j5ljo
status:
  phase: Bound

PVC Dump:
# oc get pvc -n j5ljo -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    annotations:
      control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"7996a07b-17a5-11e8-9202-0a580a800024","leaseDurationSeconds":15,"acquireTime":"2018-02-22T07:53:48Z","renewTime":"2018-02-22T07:53:50Z","leaderTransitions":0}'
      pv.kubernetes.io/bind-completed: "yes"
      pv.kubernetes.io/bound-by-controller: "yes"
      volume.beta.kubernetes.io/storage-provisioner: example.com/nfs
    creationTimestamp: 2018-02-22T07:53:48Z
    finalizers:
    - kubernetes.io/pvc-protection
    name: pvc-j5ljo
    namespace: j5ljo
    resourceVersion: "93400"
    selfLink: /api/v1/namespaces/j5ljo/persistentvolumeclaims/pvc-j5ljo
    uid: 83de7e45-17a5-11e8-9c6c-000d3a1aa471
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
    storageClassName: sc-j5ljo
    volumeName: pvc-83de7e45-17a5-11e8-9c6c-000d3a1aa471
  status:
    accessModes:
    - ReadWriteOnce
    capacity:
      storage: 1Gi
    phase: Bound
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""


StorageClass Dump (if StorageClass used by PV/PVC):
# oc get sc sc-j5ljo -o yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  creationTimestamp: 2018-02-22T07:53:45Z
  name: sc-j5ljo
  resourceVersion: "93380"
  selfLink: /apis/storage.k8s.io/v1/storageclasses/sc-j5ljo
  uid: 8245d944-17a5-11e8-9c6c-000d3a1aa471
provisioner: example.com/nfs
reclaimPolicy: Retain
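
As a stopgap (a generic Kubernetes technique, not specific to these provisioners), the reclaim policy of an already-provisioned PV can be switched to Retain manually with something like:

# oc patch pv pvc-83de7e45-17a5-11e8-9c6c-000d3a1aa471 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'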


Additional info:

Comment 1 Chao Yang 2018-02-22 08:28:07 UTC
EFS has same issue.

Comment 2 Tomas Smetana 2018-02-22 12:39:36 UTC
I will try to backport the upstream patches that fixed this:

https://github.com/kubernetes-incubator/external-storage/pull/419/commits/48b796d4a1d587adf1abe49ebff4119df24c7aba

However, it is possible the patch might require updating the dependencies, which amounts to rebasing the package. That's probably something we should avoid at this stage.

Comment 3 Humble Chirammal 2018-02-23 07:54:20 UTC
(In reply to Tomas Smetana from comment #2)
> I will try to backport the upstream patchs that fixed this:
> 
> https://github.com/kubernetes-incubator/external-storage/pull/419/commits/
> 48b796d4a1d587adf1abe49ebff4119df24c7aba
> 
> However it is possible the patch might require updating the dependencies
> which equals to rebasing the package. That's probably something we should
> avoid at this stage.


Tomas, ideally it's not required to backport it. If you can retrigger or build new EFS and NFS containers from the external-storage repo, it should be supported by default.

Comment 4 Humble Chirammal 2018-02-23 07:58:22 UTC
(In reply to Humble Chirammal from comment #3)
> (In reply to Tomas Smetana from comment #2)
> > I will try to backport the upstream patchs that fixed this:
> > 
> > https://github.com/kubernetes-incubator/external-storage/pull/419/commits/
> > 48b796d4a1d587adf1abe49ebff4119df24c7aba
> > 
> > However it is possible the patch might require updating the dependencies
> > which equals to rebasing the package. That's probably something we should
> > avoid at this stage.
> 
> 
> Tomas, ideally its not required to backport it. If you can retrigger or
> create new containers of EFS and NFS containers in external storage repo, it
> should be supported by default.

We can ask Brad or Jan for help with triggering the upstream containers; they should have this support. Otherwise, if you can get the latest tarball from upstream (https://github.com/kubernetes-incubator/external-storage/releases) and we can build downstream containers from it, that will also solve this problem.

Comment 5 Tomas Smetana 2018-02-23 08:22:53 UTC
That's the problem: we need to ship a containerized version of the code we tested, and that is definitely not the recent upstream. We either backport or live with the fact that there are known issues, and I'm afraid it's going to be the latter here.

Changing target release to 3.10.0, adding "Rebase" keyword.

Comment 6 Jianwei Hou 2018-04-24 09:48:22 UTC
Summary: The following external provisioners we support have this problem:
- NFS
- EFS
- CephFS

Comment 8 Wenqi He 2018-05-16 07:34:03 UTC
Tested on the following version:
# oc version
openshift v3.10.0-0.46.0
kubernetes v1.10.0+b81c8f8

This issue still reproduces with the NFS and EFS provisioners.

Comment 13 Wenqi He 2018-06-05 10:40:21 UTC
The NFS provisioner works in today's testing with image v1.0.9; I will continue to test EFS tomorrow. Thanks.

Comment 17 Wenqi He 2018-06-13 03:19:10 UTC
We have tested on the following version:
openshift v3.10.0-0.66.0
kubernetes v1.10.0+b81c8f8

# uname -a
Linux wehe-master-etcd-nfs-1 3.10.0-862.el7.x86_64 #1 SMP Wed Mar 21 18:14:51 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux

# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 7.5 (Maipo)

"Retain" reclaim policy works on NFS/EFS/CephFS provsioner.

We have a separate email thread to track the image tag issue. Thanks.

Comment 18 Tomas Smetana 2018-06-19 06:13:51 UTC
Thank you. I myself am not totally sure about the correct versioning. I will try to find somebody who could shed some light on this.

Comment 20 errata-xmlrpc 2018-07-30 19:09:51 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:1816

