Description of problem:
Deleting a dynamically provisioned PVC that uses CSI and the Retain reclaim policy creates another PV.

Version-Release number of selected component (if applicable):
oc v3.10.0-0.47.0
openshift v3.10.0-0.47.0
kubernetes v1.10.0+b81c8f8
csi-provisioner-0.2.0-1.el7.x86_64
csi-attacher-0.2.0-3.git27299be.el7.x86_64

How reproducible:
> 90%

Steps to Reproduce:
1. Deploy CSI per https://github.com/openshift/openshift-docs/pull/8783/files
2. Create a StorageClass "test-sc" with reclaimPolicy=Retain
3. Create a new project "mytest"
4. Create a dynamically provisioned PVC using "test-sc"
5. Delete the PVC and recreate a PVC with the same name shortly afterwards
6. Check the PVs in the cluster

Actual results:
When the PVC is deleted, another PV is created for it.

[root@host-172-16-120-103 csi]# oc create -f pvc.yaml
persistentvolumeclaim "pvc1" created
[root@host-172-16-120-103 csi]# oc get pvc
NAME      STATUS    VOLUME                                   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc1      Bound     kubernetes-dynamic-pv-daf5d3035d8d11e8   1Gi        RWO            cinder         2s
[root@host-172-16-120-103 csi]# oc get pv
NAME                                     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM         STORAGECLASS   REASON    AGE
kubernetes-dynamic-pv-c2ce886e5d8d11e8   1Gi        RWO            Retain           Released   mytest/pvc1   cinder                   43s
kubernetes-dynamic-pv-daf5d3035d8d11e8   1Gi        RWO            Retain           Bound      mytest/pvc1   cinder                   3s
[root@host-172-16-120-103 csi]# oc delete pvc --all
persistentvolumeclaim "pvc1" deleted
[root@host-172-16-120-103 csi]# oc get pv
NAME                                     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM         STORAGECLASS   REASON    AGE
kubernetes-dynamic-pv-c2ce886e5d8d11e8   1Gi        RWO            Retain           Released   mytest/pvc1   cinder                   52s
kubernetes-dynamic-pv-daf5d3035d8d11e8   1Gi        RWO            Retain           Released   mytest/pvc1   cinder                   12s
kubernetes-dynamic-pv-dd3da9245d8d11e8   1Gi        RWO            Retain           Released   mytest/pvc1   cinder                   8s
[root@host-172-16-120-103 csi]# oc create -f pvc.yaml
persistentvolumeclaim "pvc1" created
[root@host-172-16-120-103 csi]# oc get pv
NAME                                     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM         STORAGECLASS   REASON    AGE
kubernetes-dynamic-pv-c2ce886e5d8d11e8   1Gi        RWO            Retain           Released   mytest/pvc1   cinder                   1m
kubernetes-dynamic-pv-daf5d3035d8d11e8   1Gi        RWO            Retain           Released   mytest/pvc1   cinder                   24s
kubernetes-dynamic-pv-dd3da9245d8d11e8   1Gi        RWO            Retain           Released   mytest/pvc1   cinder                   20s
kubernetes-dynamic-pv-e7b567025d8d11e8   1Gi        RWO            Retain           Bound      mytest/pvc1   cinder                   3s
kubernetes-dynamic-pv-e884dc1e5d8d11e8   1Gi        RWO            Retain           Released   mytest/pvc1   cinder                   1s
[root@host-172-16-120-103 csi]# oc delete pvc --all
persistentvolumeclaim "pvc1" deleted
[root@host-172-16-120-103 csi]# oc get pv
NAME                                     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM         STORAGECLASS   REASON    AGE
kubernetes-dynamic-pv-c2ce886e5d8d11e8   1Gi        RWO            Retain           Released   mytest/pvc1   cinder                   1m
kubernetes-dynamic-pv-daf5d3035d8d11e8   1Gi        RWO            Retain           Released   mytest/pvc1   cinder                   38s
kubernetes-dynamic-pv-dd3da9245d8d11e8   1Gi        RWO            Retain           Released   mytest/pvc1   cinder                   34s
kubernetes-dynamic-pv-e7b567025d8d11e8   1Gi        RWO            Retain           Released   mytest/pvc1   cinder                   17s
kubernetes-dynamic-pv-e884dc1e5d8d11e8   1Gi        RWO            Retain           Released   mytest/pvc1   cinder                   15s

Expected results:
No new PV is created.
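For reference, the leftover Released PVs can be cleaned up manually. This is only a cleanup sketch, not a fix for the duplicate provisioning; the PV name is taken from the output above:

[root@host-172-16-120-103 csi]# oc get pv | grep Released
[root@host-172-16-120-103 csi]# oc delete pv kubernetes-dynamic-pv-c2ce886e5d8d11e8

Note that with reclaimPolicy=Retain, deleting the PV object does not delete the backing Cinder volume; that has to be removed separately in OpenStack.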
Master Log:

Node Log (of failed PODs):

PV Dump:

PVC Dump:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc1
spec:
  storageClassName: test-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

StorageClass Dump (if StorageClass used by PV/PVC):
# cat sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: test-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi-cinderplugin
reclaimPolicy: Retain
parameters:

Additional info:
Some logs from csi-provisioner:

E0522 06:46:08.472226       1 leaderelection.go:244] error initially creating leader election record: create not allowed, PVC should already exist
E0522 06:46:10.471849       1 leaderelection.go:244] error initially creating leader election record: create not allowed, PVC should already exist
E0522 06:46:12.471697       1 leaderelection.go:244] error initially creating leader election record: create not allowed, PVC should already exist
E0522 06:46:12.473153       1 leaderelection.go:244] error initially creating leader election record: create not allowed, PVC should already exist
E0522 06:50:59.758154       1 controller.go:769] Error watching for provisioning success, can't provision for claim "mytest/pvc1": events is forbidden: User "system:serviceaccount:csi:cinder-csi" cannot list events in the namespace "mytest": User "system:serviceaccount:csi:cinder-csi" cannot list events in project "mytest"
I0522 06:50:59.758174       1 leaderelection.go:156] attempting to acquire leader lease...
E0522 06:50:59.770627       1 leaderelection.go:273] Failed to update lock: Operation cannot be fulfilled on persistentvolumeclaims "pvc1": the object has been modified; please apply your changes to the latest version and try again
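The forbidden errors above point at missing RBAC for the provisioner service account. A quick way to check what the current policy allows (a diagnostic sketch, assuming cluster-admin access) is:

# oc adm policy who-can list events -n mytest

The system:serviceaccount:csi:cinder-csi account should appear in the output once the provisioner's policy is correct.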
Another weird thing: first set secret.data.cloud.conf to an invalid value, as in bug #1580273. Then update the secret to a valid value, delete the csi-cinder-controller sidecar containers, and check the PV and PVC. The PVC goes into Bound status, but 2 PVs are created.

# oc get pvc
NAME      STATUS    VOLUME                                   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc1      Bound     kubernetes-dynamic-pv-27dd46455e3311e8   1Gi        RWO            cinder         46m
# oc get pv
NAME                                     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM         STORAGECLASS   REASON    AGE
kubernetes-dynamic-pv-27dd46455e3311e8   1Gi        RWO            Retain           Bound       mytest/pvc1   cinder                   3m
kubernetes-dynamic-pv-44eada235e3311e8   1Gi        RWO            Retain           Available   mytest/pvc1   cinder                   2m
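To confirm which of the two PVs the claim is actually bound to (a verification sketch using the names from the output above):

# oc get pvc pvc1 -o jsonpath='{.spec.volumeName}'
kubernetes-dynamic-pv-27dd46455e3311e8

The extra Available PV (kubernetes-dynamic-pv-44eada235e3311e8) is the duplicate and could then be deleted with oc delete pv once it is confirmed unused.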
It seems to be caused by this:

I0523 12:02:27.147090       1 controller.go:968] cannot start watcher for PVC csi/myclaim: unknown (get events)
E0523 12:02:27.147380       1 controller.go:769] Error watching for provisioning success, can't provision for claim "csi/myclaim": unknown (get events)

I updated the policy in https://github.com/openshift/openshift-docs/pull/8783 (again). Please test.
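The exact rules are in the updated PR; for reference, the kind of grant the provisioner needs for the "get events" failure looks roughly like the sketch below. The ClusterRole/ClusterRoleBinding names are assumptions chosen for illustration; the service account name and namespace are taken from the logs earlier in this bug.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: csi-provisioner-events   # name chosen for illustration only
rules:
- apiGroups: [""]
  resources: ["events"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: csi-provisioner-events   # name chosen for illustration only
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: csi-provisioner-events
subjects:
- kind: ServiceAccount
  name: cinder-csi
  namespace: csi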
The update in the PR worked for me.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:1816