$ oc get all
No resources found.

Then the dev deleted and re-created the PVCs, but they show up as Lost.

$ oc get pvc
NAME                     STATUS    VOLUME         CAPACITY   ACCESSMODES   STORAGECLASS   AGE
example-datadir-test-0   Lost      example-pv00   0                                       1m
example-datadir-test-1   Lost      example-pv01   0                                       1m
example-datadir-test-2   Lost      example-pv02   0                                       1m
example-datadir-test-3   Lost      example-pv03   0                                       1m
example-datadir-test-4   Lost      example-pv04   0

$ oc describe pvc example-datadir-test-0
Name:           example-datadir-test-0
Namespace:      example
StorageClass:
Status:         Lost
Volume:         example-pv00
Labels:         <none>
Annotations:    openshift.io/generated-by=OpenShiftNewApp
                pv.kubernetes.io/bind-completed=yes
Capacity:       0
Access Modes:
Events:
  FirstSeen   LastSeen   Count   From                          SubObjectPath   Type      Reason          Message
  ---------   --------   -----   ----                          -------------   ----      ------          -------
  2m          2m         1       persistentvolume-controller                   Warning   ClaimMisbound   Two claims are bound to the same volume, this one is bound incorrectly

When I took a look, the PVCs were in the state above. I attempted to re-create the PVs with cluster-admin rights:

# for i in example-pv00 example-pv01 example-pv02 example-pv03 example-pv04 example-pv05
> do
> oc delete pv $i
> done
persistentvolume "example-pv00" deleted
persistentvolume "example-pv01" deleted
persistentvolume "example-pv02" deleted
persistentvolume "example-pv03" deleted
persistentvolume "example-pv04" deleted
persistentvolume "example-pv05" deleted
[root@master03 ~]# cd pv
[root@master03 pv]# ls
example-pv00  example-pv01  example-pv010  example-pv02  example-pv03  example-pv04  example-pv05  example-pv06  example-pv07  example-pv08  example-pv09
[root@master03 pv]# for i in example-pv00 example-pv01 example-pv02 example-pv03 example-pv04 example-pv05; do oc create -f $i; done
persistentvolume "example-pv00" created
persistentvolume "example-pv01" created
persistentvolume "example-pv02" created
persistentvolume "example-pv03" created
persistentvolume "example-pv04" created
persistentvolume "example-pv05" created
[root@master03 pv]# oc get pv
NAME           CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM                            REASON    AGE
example-pv00   1Gi        RWX           Recycle         Bound       example/example-datadir-test-0             2s
example-pv01   1Gi        RWX           Recycle         Bound       example/example-datadir-test-1             2s
example-pv02   1Gi        RWX           Recycle         Bound       example/example-datadir-test-2             2s
example-pv03   1Gi        RWX           Recycle         Bound       example/example-datadir-test-3             2s
example-pv04   1Gi        RWX           Recycle         Bound       example/example-datadir-test-4             2s
example-pv05   1Gi        RWX           Recycle         Available                                              1s
example-pv06   1Gi        RWX           Recycle         Available                                              5d
example-pv07   1Gi        RWX           Recycle         Available                                              5d
example-pv08   1Gi        RWX           Recycle         Available                                              5d
example-pv09   1Gi        RWX           Recycle         Available                                              5d
example-pv10   1Gi        RWX           Recycle         Available                                              5d

The newly created PVs immediately show claims against them, even though the app that uses them is not running anywhere (it has been destroyed).

Version-Release number of selected component (if applicable): 3.5

Attaching more data in private comments
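One plausible explanation for the immediate binding (a sketch based on the `describe` output above, not confirmed from this report): the old PVC objects still existed and carried a `spec.volumeName` pointing at the deleted volumes, together with the `pv.kubernetes.io/bind-completed` annotation. As soon as PVs with those names reappeared, the controller could satisfy the references again. The surviving PVC would look roughly like this (field values assumed from the report):

```yaml
# Hypothetical reconstruction of the surviving PVC. Because
# spec.volumeName still names the (re-created) PV, the
# persistentvolume-controller binds the pair immediately.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-datadir-test-0
  namespace: example
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  volumeName: example-pv00   # reference kept from the earlier bind
```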
If the customer re-creates the PVs, claims are immediately shown against them. They are not mounted anywhere, by the way (they are NFS volumes).

[root@master03 ~]# oc get pvc
NAME                     STATUS    VOLUME         CAPACITY   ACCESSMODES   AGE
example-datadir-test-0   Bound     example-pv00   1Gi        RWX           2h
example-datadir-test-1   Bound     example-pv01   1Gi        RWX           2h
example-datadir-test-2   Bound     example-pv02   1Gi        RWX           2h
example-datadir-test-3   Bound     example-pv03   1Gi        RWX           2h
example-datadir-test-4   Bound     example-pv04   1Gi        RWX           2h
[root@master03 ~]# oc describe pvc example-datadir-test-0
Name:           example-datadir-test-0
Namespace:      example
StorageClass:
Status:         Bound
Volume:         example-pv00
Labels:         <none>
Capacity:       1Gi
Access Modes:   RWX
No events.
[root@master03 ~]# oc delete pvc example-datadir-test-4
persistentvolumeclaim "example-datadir-test-4" deleted
[root@master03 ~]# set -o vi
[root@master03 ~]# oc delete pvc example-datadir-test-3
persistentvolumeclaim "example-datadir-test-3" deleted
[root@master03 ~]# oc delete pvc example-datadir-test-2
persistentvolumeclaim "example-datadir-test-2" deleted
[root@master03 ~]# oc delete pvc example-datadir-test-1
persistentvolumeclaim "example-datadir-test-1" deleted
[root@master03 ~]# oc delete pvc example-datadir-test-0
persistentvolumeclaim "example-datadir-test-0" deleted
[root@master03 ~]# oc get pvc
No resources found.
[root@master03 ~]# oc get pvc --all-namespaces
[root@master03 ~]# oc get pv
NAME           CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM                            REASON    AGE
example-pv00   1Gi        RWX           Recycle         Failed      example/example-datadir-test-0             25m
example-pv01   1Gi        RWX           Recycle         Failed      example/example-datadir-test-1             25m
example-pv02   1Gi        RWX           Recycle         Failed      example/example-datadir-test-2             25m
example-pv03   1Gi        RWX           Recycle         Failed      example/example-datadir-test-3             25m
example-pv04   1Gi        RWX           Recycle         Failed      example/example-datadir-test-4             25m
example-pv05   1Gi        RWX           Recycle         Available                                              25m
example-pv06   1Gi        RWX           Recycle         Available                                              5d
example-pv07   1Gi        RWX           Recycle         Available                                              5d
example-pv08   1Gi        RWX           Recycle         Available                                              5d
example-pv09   1Gi        RWX           Recycle         Available                                              5d
example-pv10   1Gi        RWX           Recycle         Available                                              5d
I am not sure what the bug is here. This is how PVC provisioning works: if you have unallocated PVs and you create PVCs that can use those PVs, they are automatically bound. Also, deleting a PV before deleting its PVC will naturally leave the PVC in the "Lost" state. Two claims bound to the same PV does look like a problem, but I have seen that happen when users create PVCs with the same names as before.
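The normal flow described above can be illustrated with a minimal PV/PVC pair (hypothetical names and NFS details; the NFS backing is an assumption matching the report). The controller matches the claim's requested size and access mode against Available volumes and binds automatically:

```yaml
# Available PV: 1Gi, RWX, recycled on release (as in the report).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo-pv               # hypothetical name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    server: nfs.example.com   # hypothetical NFS server
    path: /exports/demo       # hypothetical export path
---
# PVC requesting <= 1Gi with a compatible access mode; once both
# objects exist, the controller binds demo-claim to demo-pv.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim            # hypothetical name
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```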
Can you describe precisely what the expected behaviour should have been and what actually happened?
When you create a PVC, the expected behavior is for it to remain unbound until a PV becomes available, at which point it is bound. Here the PVCs entered the Lost state immediately after creation, not after the PVs were deleted. I would only expect a PVC to enter the "Lost" state after it has first been:

1) created
2) bound to a PV
3) had its PV deleted

not simply after:

1) created

I will see if the customer is able to reproduce this deliberately or whether it was a one-time occurrence.
OK, it turns out that the environment had been upgraded to 3.6, where the recycler is deprecated. Now the recycler pod spins up and gets ImagePullBackOff on the image. I don't see it in our catalog, so I guess it is no longer available in 3.6? Although the customer says they have another environment where it is working with 3.6, which is odd...

registry.access.redhat.com/openshift3/ose-recycler   v3.6.173.0.5   860ad4e36b70   6 weeks ago   970.1 MB
Steven - the recycler function is DEPRECATED. We do continue to ship the image for a few releases after deprecation, so the fact that the image is missing from 3.6 is a (known) bug. It should be fixed (recycler image built and included), but I'm double-checking...
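Since the Recycle reclaim policy is deprecated, one way to avoid the recycler pod (and thus the missing-image failure) is to use the Retain policy instead; this is a sketch of that workaround, not an official recommendation from this thread, and the NFS server/path are assumptions:

```yaml
# With Retain, no recycler pod is scheduled. After the bound PVC is
# deleted, the PV moves to Released and an admin must scrub the export
# and re-create the PV object to make it Available again.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv00            # PV name taken from the report
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain   # replaces deprecated Recycle
  nfs:
    server: nfs.example.com     # hypothetical NFS server
    path: /exports/example-pv00 # hypothetical export path
```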
Verified on version:

oc v3.5.5.31.36
kubernetes v1.5.2+43a9be4

1. When deleting the PVCs, the status of the PVs bound to the deleted PVCs changed to "Available" as expected.
2. When re-creating the PVCs, their status changed to "Bound" as expected, and they used different PVs as expected.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:3049