Bug 1399855 - PVC is still bound to PV that claims it is released
Summary: PVC is still bound to PV that claims it is released
Keywords:
Status: CLOSED CANTFIX
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 3.2.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: low
Target Milestone: ---
Target Release: ---
Assignee: Hemant Kumar
QA Contact: Jianwei Hou
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-11-29 22:27 UTC by Eric Jones
Modified: 2020-02-14 18:13 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-06-02 16:09:30 UTC
Target Upstream Version:



Description Eric Jones 2016-11-29 22:27:25 UTC
Description of problem:
Customer has several applications that use storage provided by PVs/PVCs. The applications were not needed for some time, so they scaled the apps down to 0 but left the PVs/PVCs in place so the apps could be scaled back up and used immediately. When the apps were needed again, they scaled them all back up from 0 to 1, and the apps were immediately able to use the storage. However, after scaling the apps back up, the PVs all changed their status to Released, even though they are still bound to their PVCs and still allow access to the storage backend.
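
For reference, the scale-down/scale-up sequence described above roughly corresponds to the following (the DeploymentConfig name is illustrative; only the namespace comes from the customer's output below):

# oc scale dc/postgresql --replicas=0 -n dijkrob-trial    # apps idle, PVs/PVCs left in place
# oc scale dc/postgresql --replicas=1 -n dijkrob-trial    # apps come back and can use the storage
# oc get pv                                               # PVs now show STATUS "Released" instead of "Bound"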


Additional Information:

# oc get pv
NAME                     CAPACITY   ACCESSMODES   STATUS     CLAIM                                    REASON    AGE
pv-glustervolume2        1Gi        RWO           Released   dijkrob-trial/postgresql-public                    34d
pv-glustervolume3        1Gi        RWO           Released   dijkrob-trial/postgresql-author                    34d
pv-glustervolume4        1Gi        RWO           Released   dijkrob-trial/postgresql-public                    34d


# oc get pv pv-glustervolume3 -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  creationTimestamp: <TIME_DATE>
  name: pv-glustervolume3
  resourceVersion: "4946821"
  selfLink: /api/v1/persistentvolumes/pv-glustervolume3
  uid: <uid>
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: postgresql-author
    namespace: dijkrob-trial
    resourceVersion: "4946818"
    uid: <UID>
  glusterfs:
    endpoints: glusterfs-cluster
    path: glustervol3
  persistentVolumeReclaimPolicy: Retain
status:
  phase: Released
  
  
# oc describe pv pv-glustervolume3
Name:           pv-glustervolume3
Labels:         <none>
Status:         Released
Claim:          dijkrob-trial/postgresql-author
Reclaim Policy: Retain
Access Modes:   RWO
Capacity:       1Gi
Message:
Source:
    Type:               Glusterfs (a Glusterfs mount on the host that shares a pod's lifetime)
    EndpointsName:      glusterfs-cluster
    Path:               glustervol3
    ReadOnly:           false
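
As a side note, a PV is normally moved to Released only after the PVC it was bound to has been deleted (the claimRef is kept, but the claim itself is gone), so a quick consistency check here would be to confirm the claim still exists and that its metadata.uid matches spec.claimRef.uid on the PV. Something along these lines, using the claim name and namespace from the claimRef above:

# oc get pvc postgresql-author -n dijkrob-trial -o yaml | grep -E 'uid|volumeName|phase'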

Comment 4 Hemant Kumar 2016-12-02 02:40:53 UTC
I have tried this with the latest OpenShift version using dynamic provisioning and I can't reproduce it.

The mount goes away when the pods get deleted and comes back when the dc is scaled back to 1, but the whole time the PV's state is `Bound`.
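
For anyone retrying this, the check above roughly corresponds to the following (the dc and project names are placeholders):

# oc scale dc/<app> --replicas=0 -n <project>
# oc get pv                # PV stays "Bound" while the pods are gone
# oc scale dc/<app> --replicas=1 -n <project>
# oc get pv                # still "Bound" after scaling back up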

@Eric - if we can get some logs from around the time the app was scaled up from 0 and the PV showed the "Released" state, it would be great indeed.

Comment 5 Eric Jones 2016-12-02 21:44:22 UTC
The customer has not provided a timeframe over which this occurred, nor any logs, so I have requested them and will update here with that information as soon as I have it.

Comment 6 Jan Safranek 2016-12-05 08:55:36 UTC
The PV binder has been rewritten from scratch in OpenShift 3.3. It's very likely this bug has been fixed there.

It's also very unlikely that scaling a deployment up or down changes the state of a PV or PVC - the code that changes PV state does not read pod states at all. On the other hand, that code is full of surprises, which is why it was rewritten. Logs from the master where we could see the PV change state from Bound to Released would help.
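
To gather those, something like the following on the master should be enough (assuming the master runs as the atomic-openshift-master systemd unit; the unit name and time window will vary by installation):

# journalctl -u atomic-openshift-master --since "2016-11-28" | grep -i pv-glustervolume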

