Bug 1305417

Summary: Verify claim UID when releasing and binding volumes
Product: OpenShift Container Platform
Component: Storage
Version: 3.1.0
Reporter: Jaspreet Kaur <jkaur>
Assignee: Mark Turansky <mturansk>
QA Contact: Jianwei Hou <jhou>
Status: CLOSED ERRATA
Severity: medium
Priority: high
Hardware: Unspecified
OS: Unspecified
CC: aos-bugs, bchilds, erich, jkrieger, lxia, mturansk, pep, szobair, tdawson, xtian
Doc Type: Bug Fix
Type: Bug
Last Closed: 2016-05-12 16:28:17 UTC
Bug Depends On: 1310587
Bug Blocks: 1267746, 1313560

Description Jaspreet Kaur 2016-02-08 06:04:39 UTC
Description of problem:
The statuses of the persistent volume claims and the persistent volumes do not match: a persistent volume can be marked Released while it is still in use and bound to a persistent volume claim.


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1) Create the PV:

cat pv1.json
{
  "apiVersion": "v1",
  "kind": "PersistentVolume",
  "metadata": {
    "name": "pv0001"
  },
  "spec": {
    "capacity": {
        "storage": "1Gi"
    },
    "accessModes": [ "ReadWriteOnce" ],
    "nfs": {
        "path": "/home/data/pv0001",
        "server": "10.65.x.y"
    },
    "persistentVolumeReclaimPolicy": "Recycle"
  }
}

oc create -f pv1.json
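
At this point the newly created PV should report an Available status, since no claim has bound it yet. A quick check (output abridged and illustrative):

oc get pv pv0001
# NAME      ...   STATUS      CLAIM     ...
# pv0001    ...   Available             ...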

2) Claim the PV:

cat pvc.json
{
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {
        "name": "claim1"
    },
    "spec": {
        "accessModes": [ "ReadWriteOnce" ],
        "resources": {
            "requests": {
                "storage": "1Gi"
            }
        }
    }
}

oc create -f pvc.json

[chris@master1 ~]$ oc get pv
NAME      LABELS    CAPACITY   ACCESSMODES   STATUS    CLAIM            REASON    AGE
pv0001    <none>    1Gi        RWO           Bound     persist/claim1             57m
[chris@master1 ~]$ oc get pvc
NAME      LABELS    STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
claim1    <none>    Bound     pv0001    1Gi        RWO           57m

3) Delete the claim:

oc delete pvc claim1
persistentvolumeclaim "claim1" deleted

oc get pv
NAME      LABELS    CAPACITY   ACCESSMODES   STATUS     CLAIM            REASON    AGE
pv0001    <none>    1Gi        RWO           Released   persist/claim1             57m
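
Note that the Released PV still carries the deleted claim's identity in spec.claimRef, including its old UID. A quick way to see this (a sketch, assuming jsonpath output is available in this oc version; the UID shown is illustrative):

oc get pv pv0001 -o jsonpath='{.spec.claimRef.namespace}/{.spec.claimRef.name} {.spec.claimRef.uid}'
# persist/claim1 <old-claim-UID>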

4) Now create the persistent volume claim again:

oc create -f pvc.json
persistentvolumeclaim "claim1" created


5) Check the status of the PV and the PVC: the PVC shows Bound to pv0001, while pv0001 still shows the Released state:

oc get pvc
NAME      LABELS    STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
claim1    <none>    Bound     pv0001    1Gi        RWO           4s
[chris@master1 ~]$ oc get pv
NAME      LABELS    CAPACITY   ACCESSMODES   STATUS     CLAIM            REASON    AGE
pv0001    <none>    1Gi        RWO           Released   persist/claim1             58m


Actual results: The PV and the PVC show inconsistent states: the PVC is Bound while the PV remains Released.


Expected results: The PV and the PVC should both show a consistent state, i.e. both Bound.


Additional info: The root cause is that only namespace/name was used when
binding PVCs, instead of the object UID, so a recreated claim with the same
name was treated as the same object. Below is the upstream link with the fix.

https://github.com/kubernetes/kubernetes/pull/20197
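
For context: a bound PV records its claim in spec.claimRef, which carries the claim's UID in addition to namespace/name. A minimal sketch of the mismatch the fix detects (jsonpath output assumed available; UIDs are illustrative):

# UID recorded on the released PV, still that of the deleted claim:
oc get pv pv0001 -o jsonpath='{.spec.claimRef.uid}'
# UID of the recreated claim, a brand-new object with a new UID:
oc get pvc claim1 -o jsonpath='{.metadata.uid}'
# Before the fix the binder matched claims by namespace/name only, so these
# two different UIDs were treated as the same claim; with the fix the UIDs
# are compared and the stale binding is rejected.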

Comment 2 Mark Turansky 2016-02-18 14:23:28 UTC
I verified #20197 is in Origin after the most recent rebase.

Comment 3 Jianwei Hou 2016-03-09 07:20:41 UTC
Verified on:
openshift v3.1.1.911
kubernetes v1.2.0-alpha.7-703-gbc4550d
etcd 2.2.5

This bug has been fixed: when a PV is not 'Available', it cannot bind to any PVC. The PV and the PVC no longer show inconsistent statuses.
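
A quick re-test sketch of the original scenario under the fixed binder (expected behavior, not captured output):

oc delete pvc claim1     # pv0001 moves to Released
oc create -f pvc.json    # recreate claim1 with the same name (new UID)
oc get pvc claim1        # stays Pending; the Released PV is not reused
oc get pv pv0001         # remains Released until the recycler returns it to Available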

Comment 6 Scott Dodson 2016-04-06 14:56:59 UTC
This was fixed in OSE 3.1.1.6 via https://bugzilla.redhat.com/show_bug.cgi?id=1313560, which regrettably encapsulates multiple bug fixes.

This bug is tracking the fix to ensure it is also fixed in 3.2.

Comment 8 Jianwei Hou 2016-04-07 02:55:41 UTC
Tested on:
openshift v3.2.0.11
kubernetes v1.2.0-36-g4a3f9c5
etcd 2.2.5

The bug does not reproduce here either; I believe it is fixed in both 3.1 and 3.2.

Comment 11 errata-xmlrpc 2016-05-12 16:28:17 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2016:1064