Bug 1563512 - BlockVolume dynamic provisioning error
Summary: BlockVolume dynamic provisioning error
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 3.10.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: low
Target Milestone: ---
Target Release: 3.10.z
Assignee: Jan Safranek
QA Contact: Qin Ping
URL:
Whiteboard:
Depends On:
Blocks: 1573520
 
Reported: 2018-04-04 04:34 UTC by Qin Ping
Modified: 2018-07-30 19:12 UTC
CC List: 7 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Cloned to: 1573520
Environment:
Last Closed: 2018-07-30 19:11:39 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
System: Red Hat Product Errata | ID: RHBA-2018:1816 | Private: 0 | Priority: None | Status: None | Summary: None | Last Updated: 2018-07-30 19:12:03 UTC

Description Qin Ping 2018-04-04 04:34:51 UTC
Description of problem:
BlockVolume dynamic provisioning error

Version-Release number of selected component (if applicable):
openshift v3.10.0-0.15.0
kubernetes v1.9.1+a0ce1bc657

How reproducible:
Always

Steps to Reproduce:
1. Enable feature gate "BlockVolume" (a config sketch follows these steps)
2. Create a StorageClass
$ oc export sc standard 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
  creationTimestamp: null
  name: standard
parameters:
  fstype: xfs
provisioner: kubernetes.io/cinder
reclaimPolicy: Delete
3. Create a dynamic provisioning PVC with volumeMode=Block
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 5Gi
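(Step 1 is environment-specific. For reference, a minimal sketch of enabling the gate on an OCP 3.10 master, assuming the default /etc/origin/master/master-config.yaml layout; adjust for your installation:)

# Sketch only: enable the BlockVolume feature gate for the API server and controller manager
kubernetesMasterConfig:
  apiServerArguments:
    feature-gates:
    - BlockVolume=true
  controllerArguments:
    feature-gates:
    - BlockVolume=true

The kubelet typically needs the same gate (via kubeletArguments in the node configuration) before pods can actually attach and use block volumes.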

Actual results:
PV is provisioned but in "Pending" status.
pvc-8815692a-37b0-11e8-87c0-fa163e9983f0   5Gi        RWO            Delete           Pending     blockvolume/block-pvc                   standard                    14m

PVC is in "Pending" status
block-pvc   Pending                                       standard       15m

No failure event is reported on the PVC:
Events:
  Type    Reason                 Age   From                         Message
  ----    ------                 ----  ----                         -------
  Normal  ProvisioningSucceeded  1m    persistentvolume-controller  Successfully provisioned volume pvc-8815692a-37b0-11e8-87c0-fa163e9983f0 using kubernetes.io/cinder

Expected results:
If the StorageClass does not support BlockVolume, OCP should report an error.

Master Log:

Node Log (of failed PODs):

PV Dump:
# oc describe pv pvc-8815692a-37b0-11e8-87c0-fa163e9983f0
Name:            pvc-8815692a-37b0-11e8-87c0-fa163e9983f0
Labels:          failure-domain.beta.kubernetes.io/zone=nova
Annotations:     kubernetes.io/createdby=cinder-dynamic-provisioner
                 pv.kubernetes.io/bound-by-controller=yes
                 pv.kubernetes.io/provisioned-by=kubernetes.io/cinder
StorageClass:    standard
Status:          Pending
Claim:           blockvolume/block-pvc
Reclaim Policy:  Delete
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        5Gi
Message:         
Source:
    Type:      Cinder (a Persistent Disk resource in OpenStack)
    VolumeID:  4818f911-195d-4227-aa33-148780aadfca
    FSType:    xfs
    ReadOnly:  false
Events:        <none>

PVC Dump:

StorageClass Dump (if StorageClass used by PV/PVC):

Additional info:
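The PV dump above already hints at the root cause: the claim requests volumeMode Block, but the provisioned PV ends up with VolumeMode Filesystem, so the two can never bind. A quick way to surface the mismatch (a sketch reusing the object names from this report):

# Compare the requested and provisioned volume modes (names taken from the dumps above)
oc get pvc block-pvc -n blockvolume -o jsonpath='{.spec.volumeMode}{"\n"}'
oc get pv pvc-8815692a-37b0-11e8-87c0-fa163e9983f0 -o jsonpath='{.spec.volumeMode}{"\n"}'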

Comment 1 Jan Safranek 2018-04-04 10:30:02 UTC
I don't think we have Kubernetes 1.10 in OpenShift yet, which makes testing 3.10 features a bit pointless. Please wait until `oc version` tells you that Kubernetes 1.10 is there.

I wonder why we have 3.10 builds without Kubernetes 1.10 in them. It's very confusing.

Comment 2 Hemant Kumar 2018-04-04 13:57:53 UTC
Yeah, this huge PR https://github.com/openshift/origin/pull/19137 that will bring OpenShift in line with 1.10 has yet to merge. We should wait for that before reporting bugs against 3.10.

Comment 4 Qin Ping 2018-04-27 03:35:09 UTC
This bug still exists in:
oc v3.10.0-0.29.0
openshift v3.10.0-0.29.0
kubernetes v1.10.0+b81c8f8

So I am reopening it.

StorageClass Dump:
# oc export sc glusterprovisioner
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  creationTimestamp: null
  name: glusterprovisioner
parameters:
  clusterid: a6bc848c2ecec1bdc88e65f8d0e72894
  resturl: http://****
  restuser: test
  restuserkey: test
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Delete
volumeBindingMode: Immediate

Dump PVC:
# cat pvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 1Gi

# oc get pvc
NAME        STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS         AGE
block-pvc   Pending                                       glusterprovisioner   10m

# oc describe pvc block-pvc 
Name:          block-pvc
Namespace:     piqin
StorageClass:  glusterprovisioner
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/glusterfs
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Block
Events:
  Type    Reason                 Age   From                         Message
  ----    ------                 ----  ----                         -------
  Normal  ProvisioningSucceeded  10m   persistentvolume-controller  Successfully provisioned volume pvc-42111613-49ca-11e8-bd65-fa163e3e5ed4 using kubernetes.io/glusterfs

# oc get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                 STORAGECLASS         REASON    AGE
pvc-42111613-49ca-11e8-bd65-fa163e3e5ed4   1Gi        RWO            Delete           Pending   piqin/block-pvc       glusterprovisioner             8m
regpv-volume                               17G        RWX            Retain           Bound     default/regpv-claim                                  1h

Comment 5 hchen 2018-05-01 15:20:17 UTC
Use https://bugzilla.redhat.com/show_bug.cgi?id=1573520 to track glusterfs issue

Comment 6 Qin Ping 2018-05-29 03:11:36 UTC
Reopening this bug, as it still exists in OpenShift:
oc v3.10.0-0.53.0
openshift v3.10.0-0.53.0
kubernetes v1.10.0+b81c8f8

# oc get pvc
NAME      STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc1      Pending                                       standard       5m

# oc get pvc -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    annotations:
      volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/cinder
    creationTimestamp: 2018-05-29T03:03:07Z
    finalizers:
    - kubernetes.io/pvc-protection
    name: pvc1
    namespace: mytest
    resourceVersion: "5216"
    selfLink: /api/v1/namespaces/mytest/persistentvolumeclaims/pvc1
    uid: cfe24821-62ec-11e8-aa36-fa163e9d9932
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
    storageClassName: standard
    volumeMode: Block    <-------- volumeMode is Block
  status:
    phase: Pending
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

# oc describe pvc
Events:
  Type    Reason                 Age   From                         Message
  ----    ------                 ----  ----                         -------
  Normal  ProvisioningSucceeded  6m    persistentvolume-controller  Successfully provisioned volume pvc-cfe24821-62ec-11e8-aa36-fa163e9d9932 using kubernetes.io/cinder


# oc get pv pvc-cfe24821-62ec-11e8-aa36-fa163e9d9932 -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    kubernetes.io/createdby: cinder-dynamic-provisioner
    pv.kubernetes.io/bound-by-controller: "yes"
    pv.kubernetes.io/provisioned-by: kubernetes.io/cinder
  creationTimestamp: 2018-05-29T03:03:07Z
  finalizers:
  - kubernetes.io/pv-protection
  labels:
    failure-domain.beta.kubernetes.io/zone: nova
  name: pvc-cfe24821-62ec-11e8-aa36-fa163e9d9932
  resourceVersion: "5217"
  selfLink: /api/v1/persistentvolumes/pvc-cfe24821-62ec-11e8-aa36-fa163e9d9932
  uid: d0179dc3-62ec-11e8-aa36-fa163e9d9932
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  cinder:
    fsType: xfs
    volumeID: 5c1a6dc1-4312-407d-bdb6-57b6071ae4f7
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: pvc1
    namespace: mytest
    resourceVersion: "5216"
    uid: cfe24821-62ec-11e8-aa36-fa163e9d9932
  persistentVolumeReclaimPolicy: Delete
  storageClassName: standard
  volumeMode: Filesystem  <------ volumeMode is Filesystem, so it can not be bound to PVC
status:
  phase: Pending

Comment 8 Jan Safranek 2018-05-29 13:38:56 UTC
I went through all internal volume plugins and fixed those that could be fixed in https://github.com/kubernetes/kubernetes/pull/64447

AWS, GCE, Ceph RBD, Azure DD and vSphere provisioners will support block PV provisioning

All the others (e.g. Cinder!) won't support block PV provisioning because the plugin itself does not support block PVs yet. I filed bug #1583685 for it.

Please create separate bug(s) for external provisioners that should support block PVs; I noticed just Gluster + iSCSI - is that all?
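(For context, a Block-mode PV is consumed as a raw device through the pod's volumeDevices field rather than volumeMounts. A minimal sketch of such a consumer, reusing the block-pvc claim name from this report; the other names are illustrative:)

# Sketch: pod using a volumeMode: Block PVC as a raw block device
apiVersion: v1
kind: Pod
metadata:
  name: block-consumer
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/rhel7:latest
    command: ["sleep", "3600"]
    volumeDevices:
    - name: data
      devicePath: /dev/xvda    # device node exposed inside the container
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: block-pvc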

Comment 11 Jan Safranek 2018-06-08 11:42:10 UTC
One change regarding comment #8: Azure DD does not support block volumes in 3.10, so it won't provision them there. Re-check in 3.11.

Comment 12 Jan Safranek 2018-06-21 13:23:47 UTC
Opened 3.10.0 PR: https://github.com/openshift/origin/pull/20058

Comment 14 Qin Ping 2018-07-09 06:05:57 UTC
Verified in:
oc v3.10.14
openshift v3.10.14
kubernetes v1.10.0+b81c8f8

# uname -a
Linux qe-piqin-master-etcd-1 3.10.0-862.el7.x86_64 #1 SMP Wed Mar 21 18:14:51 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux

# cat /etc/redhat-release 
Red Hat Enterprise Linux Atomic Host release 7.5

Comment 16 errata-xmlrpc 2018-07-30 19:11:39 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:1816

