Description of problem:
Whenever the `oc delete pvc` command is run, the PVC is deleted successfully, but deletion of the corresponding PV fails.

Version-Release number of selected component (if applicable):
cns-3.6

How reproducible:
Every time

Steps to Reproduce:
1. Create a StorageClass (SC)
2. Create a PVC (a PV is dynamically provisioned and bound)
3. Delete the PVC

Actual results:
The PVC is deleted, but the PV goes into the Failed state.

Expected results:
The PV and its backing volume are deleted along with the PVC.

Additional info:

[root ~]# oc get pvc
NAME             STATUS    VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
storage-claim1   Bound     pvc-c9cd9320-4b38-11e7-ba89-005056848cc9   1Gi        RWO           fast           1h
storage-claim2   Bound     pvc-d017751e-4b38-11e7-ba89-005056848cc9   1Gi        RWO           fast           1h
storage-claim3   Bound     pvc-d65c5b26-4b38-11e7-ba89-005056848cc9   2Gi        RWO           fast           1h

[root ~]# oc get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                            STORAGECLASS   REASON    AGE
pvc-c9cd9320-4b38-11e7-ba89-005056848cc9   1Gi        RWO           Delete          Bound     storage-project/storage-claim1   fast                     1h
pvc-d017751e-4b38-11e7-ba89-005056848cc9   1Gi        RWO           Delete          Bound     storage-project/storage-claim2   fast                     1h
pvc-d65c5b26-4b38-11e7-ba89-005056848cc9   2Gi        RWO           Delete          Bound     storage-project/storage-claim3   fast                     1h

[root ~]# oc delete pvc storage-claim3
persistentvolumeclaim "storage-claim3" deleted

[root ~]# oc get pvc
NAME             STATUS    VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
storage-claim1   Bound     pvc-c9cd9320-4b38-11e7-ba89-005056848cc9   1Gi        RWO           fast           1h
storage-claim2   Bound     pvc-d017751e-4b38-11e7-ba89-005056848cc9   1Gi        RWO           fast           1h

[root ~]# oc get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                            STORAGECLASS   REASON    AGE
pvc-c9cd9320-4b38-11e7-ba89-005056848cc9   1Gi        RWO           Delete          Bound     storage-project/storage-claim1   fast                     1h
pvc-d017751e-4b38-11e7-ba89-005056848cc9   1Gi        RWO           Delete          Bound     storage-project/storage-claim2   fast                     1h
pvc-d65c5b26-4b38-11e7-ba89-005056848cc9   2Gi        RWO           Delete          Failed    storage-project/storage-claim3   fast                     1h

[root ~]# oc describe pv pvc-d65c5b26-4b38-11e7-ba89-005056848cc9
Name:            pvc-d65c5b26-4b38-11e7-ba89-005056848cc9
Labels:          <none>
Annotations:     pv.beta.kubernetes.io/gid=2002
                 pv.kubernetes.io/bound-by-controller=yes
                 pv.kubernetes.io/provisioned-by=kubernetes.io/glusterfs
StorageClass:    fast
Status:          Failed
Claim:           storage-project/storage-claim3
Reclaim Policy:  Delete
Access Modes:    RWO
Capacity:        2Gi
Message:         Volume has no class annotation
Source:
    Type:           Glusterfs (a Glusterfs mount on the host that shares a pod's lifetime)
    EndpointsName:  glusterfs-dynamic-storage-claim3
    Path:           vol_6c6b19af3c63069854d33f57fce9c032
    ReadOnly:       false
Events:
  FirstSeen  LastSeen  Count  From                          SubObjectPath  Type     Reason              Message
  ---------  --------  -----  ----                          -------------  ----     ------              -------
  33s        33s       1      persistent-volume-controller                 Warning  VolumeFailedDelete  Volume has no class annotation
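For reference, the reproduction steps above can be sketched as manifests like the following. This is illustrative only: the heketi `resturl` value is a hypothetical placeholder (it is not in this report), while the object names and sizes match the `storage-claim3` objects shown above.

```yaml
# storageclass.yaml -- GlusterFS-backed StorageClass (step 1)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi.example.com:8080"   # hypothetical heketi endpoint
---
# pvc.yaml -- claim that triggers dynamic provisioning of a PV (step 2)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: storage-claim3
spec:
  storageClassName: fast
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
```

The objects would be created with `oc create -f <file>`, and step 3 is `oc delete pvc storage-claim3`.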
The fix is in Kubernetes upstream but not yet in OpenShift upstream:
https://github.com/kubernetes/kubernetes/pull/44035/files

The issue is caused by annotations missing from the dynamically created PVs.
Refer: https://github.com/kubernetes/kubernetes/issues/43929

Workaround:
  oc edit pv <pv-name>
Add the line below in the annotations section:
  volume.beta.kubernetes.io/storage-class: "<storageclass-name>"

This will be fixed in OpenShift; I will keep track of the fix.
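As a sketch of the workaround, after running `oc edit pv <pv-name>` the PV's annotations section would look like this. The class name "fast" is taken from the output in this report; substitute the PV's actual StorageClass name.

```yaml
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: kubernetes.io/glusterfs   # already present
    volume.beta.kubernetes.io/storage-class: "fast"            # added line (workaround)
```

With the class annotation restored, the persistent-volume controller can resolve the deleter plugin and finish deleting the failed PV.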
Tejas, can you please check whether this issue is fixed in the latest OCP builds?
(In reply to Humble Chirammal from comment #5)
> Tejas, can you please check whether this issue is fixed in the latest OCP
> builds?

Also, please let me know the OCP build version used in your setup.
This indeed looks like a bug in OpenShift; current 3.6 does not have https://github.com/kubernetes/kubernetes/pull/43982
Thanks Jan for the update and for filing the PR: https://github.com/openshift/origin/pull/14667
Origin PR: https://github.com/openshift/origin/pull/14667
The above PR is merged, so this issue should be fixed starting with the next OCP 3.6 builds.
*** Bug 1461688 has been marked as a duplicate of this bug. ***
Verified in v3.6.126.1. The PV and the underlying volume are automatically deleted after the PVC is deleted.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2017:1716
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days