Bug 1734701 - PVC should bind with appropriate size PV from local-storage
Summary: PVC should bind with appropriate size PV from local-storage
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 4.2.0
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ---
Target Release: 4.2.0
Assignee: Jan Safranek
QA Contact: Liang Xia
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-07-31 09:06 UTC by Liang Xia
Modified: 2019-10-01 15:12 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-10-01 14:55:03 UTC
Target Upstream Version:



Description Liang Xia 2019-07-31 09:06:01 UTC
Description of problem:
There are two PVs generated by local-storage: one is 1Gi, the other is 2Gi.
Sometimes the 1Gi PVC will bind with the 2Gi PV even when both the 1Gi and 2Gi PVs are available.

Version-Release number of selected component (if applicable):
4.2.0-0.nightly-2019-07-28-222114
local-storage-operator.v4.2.0

How reproducible:
Sometimes.

Steps to Reproduce:
1. Deploy the local-storage-operator.
2. Prepare disks to generate two PVs (1Gi and 2Gi).
3. Create a 1Gi PVC, and a pod that uses it.
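The reproduction steps above can be sketched with manifests like the following. The LocalVolume spec and device path are illustrative assumptions about the setup; only the namespace, claim name, storage class, volume mode, and 1Gi request come from this report.

```yaml
# Illustrative LocalVolume CR for the local-storage-operator; the device
# path /dev/sdd is an assumption based on the PV's local.path in this bug.
apiVersion: local.storage.openshift.io/v1
kind: LocalVolume
metadata:
  name: local-disks
  namespace: local-storage
spec:
  storageClassDevices:
  - storageClassName: local-block-sc
    volumeMode: Block
    devicePaths:
    - /dev/sdd
---
# The 1Gi claim from the report, which should bind to the 1Gi PV.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
  namespace: t5
spec:
  accessModes: [ReadWriteOnce]
  volumeMode: Block
  storageClassName: local-block-sc
  resources:
    requests:
      storage: 1Gi
```

A pod then references `mypvc` via `volumes[].persistentVolumeClaim.claimName` (and, since the claim is Block mode, mounts it through `volumeDevices`).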

Actual results:
The 1Gi PVC binds with the 2Gi PV even when both the 1Gi and 2Gi PVs are available.

Expected results:
The 1Gi PVC binds with the 1Gi PV.


Additional info:
$ oc get pv,pvc
NAME                                 CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM      STORAGECLASS          REASON   AGE
persistentvolume/local-pv-158dfe47   2Gi        RWO            Delete           Bound       t5/mypvc   local-block-sc                 12m
persistentvolume/local-pv-7f58a50f   1Gi        RWO            Delete           Available              local-block-sc                 9m23s
persistentvolume/local-pv-c692d3f2   2Gi        RWO            Delete           Available              local-filesystem-sc            46m
persistentvolume/local-pv-f012ba9e   1Gi        RWO            Delete           Available              local-filesystem-sc            3h5m

NAME                          STATUS   VOLUME              CAPACITY   ACCESS MODES   STORAGECLASS     AGE
persistentvolumeclaim/mypvc   Bound    local-pv-158dfe47   2Gi        RWO            local-block-sc   9m43s



$ oc get pvc mypvc -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
  creationTimestamp: "2019-07-31T08:25:27Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    name: dynamic-pvc
  name: mypvc
  namespace: t5
  resourceVersion: "936299"
  selfLink: /api/v1/namespaces/t5/persistentvolumeclaims/mypvc
  uid: c0676594-b36c-11e9-8232-000d3a92e41c
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: local-block-sc
  volumeMode: Block
  volumeName: local-pv-158dfe47
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 2Gi
  phase: Bound

Comment 1 Jan Safranek 2019-07-31 11:24:01 UTC
Can you please post the yaml files of all the PVs? There must be a rounding error or something similarly simple; the PV controller should bind PVCs to the smallest matching PV.
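The smallest-match rule mentioned here can be sketched as follows. This is an illustrative model, not the actual Kubernetes source: among Available PVs whose storage class matches and whose capacity satisfies the request, the controller is expected to pick the smallest one. The capacities and names come from this bug; the function name is hypothetical.

```python
def find_best_match(pvs, claim):
    """Return the smallest Available PV satisfying the claim, or None."""
    candidates = [
        pv for pv in pvs
        if pv["phase"] == "Available"
        and pv["storageClassName"] == claim["storageClassName"]
        and pv["capacityGi"] >= claim["requestGi"]
    ]
    # Prefer the smallest sufficient capacity so larger PVs stay free
    # for larger claims.
    return min(candidates, key=lambda pv: pv["capacityGi"], default=None)

# Both PVs available, as in the scenario this bug describes.
pvs = [
    {"name": "local-pv-158dfe47", "phase": "Available",
     "storageClassName": "local-block-sc", "capacityGi": 2},
    {"name": "local-pv-7f58a50f", "phase": "Available",
     "storageClassName": "local-block-sc", "capacityGi": 1},
]
claim = {"storageClassName": "local-block-sc", "requestGi": 1}
print(find_best_match(pvs, claim)["name"])  # local-pv-7f58a50f
```

Under this rule the 1Gi claim would always take the 1Gi PV, which is why the observed 2Gi binding looked like a bug at first.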

Comment 2 Liang Xia 2019-08-01 03:03:25 UTC
$ oc get pv local-pv-158dfe47 local-pv-7f58a50f -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: PersistentVolume
  metadata:
    annotations:
      pv.kubernetes.io/bound-by-controller: "yes"
      pv.kubernetes.io/provisioned-by: local-volume-provisioner-qe-lxia-0728-222114-d82x6-worker-centralus1-hnlcx-740f8302-b1a4-11e9-9ac3-000d3a92e440
    creationTimestamp: "2019-07-31T08:23:07Z"
    finalizers:
    - kubernetes.io/pv-protection
    labels:
      storage.openshift.com/local-volume-owner-name: local-disks
      storage.openshift.com/local-volume-owner-namespace: local-storage
    name: local-pv-158dfe47
    resourceVersion: "936294"
    selfLink: /api/v1/persistentvolumes/local-pv-158dfe47
    uid: 6ca57b0b-b36c-11e9-a378-000d3a92e02d
  spec:
    accessModes:
    - ReadWriteOnce
    capacity:
      storage: 2Gi
    claimRef:
      apiVersion: v1
      kind: PersistentVolumeClaim
      name: mypvc
      namespace: t5
      resourceVersion: "936280"
      uid: c0676594-b36c-11e9-8232-000d3a92e41c
    local:
      path: /mnt/local-storage/local-block-sc/sdd
    nodeAffinity:
      required:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - qe-lxia-0728-222114-d82x6-worker-centralus1-hnlcx
    persistentVolumeReclaimPolicy: Delete
    storageClassName: local-block-sc
    volumeMode: Block
  status:
    phase: Bound
- apiVersion: v1
  kind: PersistentVolume
  metadata:
    annotations:
      pv.kubernetes.io/provisioned-by: local-volume-provisioner-qe-lxia-0728-222114-d82x6-worker-centralus2-g79gb-8790b985-b1a4-11e9-bde8-000d3a92e02d
    creationTimestamp: "2019-07-31T08:25:46Z"
    finalizers:
    - kubernetes.io/pv-protection
    labels:
      storage.openshift.com/local-volume-owner-name: local-disks
      storage.openshift.com/local-volume-owner-namespace: local-storage
    name: local-pv-7f58a50f
    resourceVersion: "936464"
    selfLink: /api/v1/persistentvolumes/local-pv-7f58a50f
    uid: cb83acb2-b36c-11e9-a378-000d3a92e02d
  spec:
    accessModes:
    - ReadWriteOnce
    capacity:
      storage: 1Gi
    local:
      path: /mnt/local-storage/local-block-sc/sdd
    nodeAffinity:
      required:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - qe-lxia-0728-222114-d82x6-worker-centralus2-g79gb
    persistentVolumeReclaimPolicy: Delete
    storageClassName: local-block-sc
    volumeMode: Block
  status:
    phase: Available
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

Comment 3 Jan Safranek 2019-08-01 16:15:00 UTC
These two PVs are on different nodes. The scheduler filtered for nodes that have a free PV and gave preference to the one with the bigger PV. Yes, it's not optimal, but the severity is not that high, IMO.
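The explanation above can be sketched as a two-phase decision, which is why the smallest-match rule alone does not apply here. This is an illustrative model with assumed names, not Kubernetes source: the scheduler first filters to nodes that have a sufficient free PV and scores them (the concrete scoring that preferred the 2Gi node in this bug is a stand-in here), and only then does the claim bind to a PV on the chosen node.

```python
def schedule_and_bind(nodes, request_gi):
    """Pick a node first, then bind the claim to a PV on that node."""
    # Phase 1 (scheduler): keep nodes with at least one free PV that is
    # big enough for the claim.
    feasible = [n for n in nodes
                if any(c >= request_gi for c in n["free_pvs_gi"])]
    # Scoring: prefer the node with the largest free PV -- a stand-in for
    # whatever preference picked the 2Gi node in this report.
    node = max(feasible, key=lambda n: max(n["free_pvs_gi"]))
    # Phase 2 (binding): smallest sufficient PV, but only on that node.
    return node["name"], min(c for c in node["free_pvs_gi"] if c >= request_gi)

# One node holds only the 2Gi PV, the other only the 1Gi PV, as in the bug.
nodes = [
    {"name": "worker-centralus1", "free_pvs_gi": [2]},
    {"name": "worker-centralus2", "free_pvs_gi": [1]},
]
print(schedule_and_bind(nodes, 1))  # ('worker-centralus1', 2)
```

Because node selection happens before binding, a 1Gi claim can land on the node whose only free PV is 2Gi, even though a 1Gi PV exists elsewhere.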

