Description of problem:
When using the OpenShift and OpenStack integration, OpenShift creates the PV and Cinder creates the volume, but the PVC hangs and never becomes Bound.

Version-Release number of selected component (if applicable):
OpenShift Enterprise 3.1

How reproducible:
Every time I run it.

Steps to Reproduce:
1. Set up the OpenShift and OpenStack integration.
2. Create a PVC:

# cat pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: dynamic-test010
  annotations:
    volume.alpha.kubernetes.io/storage-class: "foo"
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

# oc create -f pvc.yaml
persistentvolumeclaim "dynamic-test010" created

3. Watch the PV get created:

# oc get pv
NAME              LABELS   CAPACITY   ACCESSMODES   STATUS   CLAIM                     REASON   AGE
pv-cinder-t39kl   <none>   1Gi        RWO           Bound    default/dynamic-test010            1m

4. Watch the Cinder volume get created:

# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID                                   | Status    | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 428a9b7b-0f9d-4353-bf9a-b40c5a3b9a2a | available | -            | 1    | -           | false    |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+

Actual results:
The PVC stays in the Pending state forever:

# oc get pvc
NAME              LABELS   STATUS    VOLUME   CAPACITY   ACCESSMODES   AGE
dynamic-test010   <none>   Pending

Expected results:
I would expect the PVC to reach status Bound.

Additional info:
Here is a video: https://bluejeans.com/s/9wai/
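The wait in steps 3 and 4 can be scripted instead of watched by hand. A minimal polling sketch; the `oc` function below is a stub standing in for the real client so the snippet runs standalone (on a live cluster, remove the stub and use the real `oc`):

```shell
#!/bin/sh
# Poll the claim's phase until it leaves Pending or we give up.
# Stub: the real call is `oc get pvc dynamic-test010 -o jsonpath='{.status.phase}'`.
oc() { echo "Bound"; }

phase="Pending"
for attempt in 1 2 3 4 5; do
  phase=$(oc get pvc dynamic-test010 -o "jsonpath={.status.phase}")
  [ "$phase" != "Pending" ] && break   # stop as soon as the claim binds
  sleep 10
done
echo "final phase: $phase"
```

In the failure reported here, such a loop would simply print `final phase: Pending` after the timeout.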
This did not happen on 3.2.
Scott, I need the output of `oc get pv -o yaml` to see what's wrong. It seems the provisioner did its job, so there should be a PV object in Kubernetes as a result. Logs from the openshift-master would also help.
There is a PV object getting created. Strangely, it shows as Bound in the 'get pv' output but not in the 'get pvc' output:

# oc get pv -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: PersistentVolume
  metadata:
    annotations:
      kubernetes.io/createdby: cinder-dynamic-provisioner
      volume.alpha.kubernetes.io/storage-class: foo
      volume.experimental.kubernetes.io/provisioning-required: volume.experimental.kubernetes.io/provisioning-completed
    creationTimestamp: 2016-04-25T22:03:55Z
    generateName: pv-cinder-
    name: pv-cinder-5t74c
    resourceVersion: "7804"
    selfLink: /api/v1/persistentvolumes/pv-cinder-5t74c
    uid: 9a79ca10-0b31-11e6-acc5-fa163edf1a92
  spec:
    accessModes:
    - ReadWriteOnce
    capacity:
      storage: 1Gi
    cinder:
      fsType: ext4
      volumeID: 9bd709eb-fc84-4fa6-8a04-977d96237804
    claimRef:
      apiVersion: v1
      kind: PersistentVolumeClaim
      name: dynamic-test010
      namespace: default
      resourceVersion: "7799"
      uid: 9a789b0d-0b31-11e6-acc5-fa163edf1a92
    persistentVolumeReclaimPolicy: Delete
  status:
    phase: Bound
kind: List
metadata: {}
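One sanity check on a dump like this: for the binder to mark the claim Bound, the PV's spec.claimRef.uid must equal the PVC's own metadata.uid. A hypothetical cross-check, with the UID hard-coded from the dump above (normally you would pull the second value live with `oc get pvc dynamic-test010 -o jsonpath='{.metadata.uid}'`):

```shell
#!/bin/sh
# Compare the UID the PV thinks it is bound to against the PVC's actual UID.
pv_claim_uid="9a789b0d-0b31-11e6-acc5-fa163edf1a92"   # PV spec.claimRef.uid, from the dump
pvc_uid="9a789b0d-0b31-11e6-acc5-fa163edf1a92"        # PVC metadata.uid (hard-coded here for illustration)

if [ "$pv_claim_uid" = "$pvc_uid" ]; then
  verdict="match"     # PV and PVC reference each other; the claim should become Bound
else
  verdict="mismatch"  # stale claimRef; the binder can never complete this bind
fi
echo "claimRef check: $verdict"
```

If the UIDs match (as they appear to here) and the claim still sits in Pending, the problem is in the binder's status update rather than in the objects themselves.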
(In reply to Scott McCarty from comment #3)
> There is a PV object getting created. Strangely, it shows mounted in the
> 'get pv' but not in the 'get pvc' output:
> [full `oc get pv -o yaml` output snipped; see comment #3]

Also, after I reboot the machine, the PVC gets bound correctly, which is very strange.
Also, I noticed that the Cinder volume is never getting attached to the node:

+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID                                   | Status    | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 5b218961-a68b-4f7a-afda-cb51668fe466 | creating  | -            | 1    | -           | false    |             |
| acfb6248-88c7-4391-8682-847e07a1019b | available | -            | 1    | -           | false    |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
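A sketch of how the attachment state could be checked per volume rather than eyeballing `cinder list`; the `cinder` function below is a stub standing in for the real client so the snippet runs standalone, and the volume ID is the one from the listing above:

```shell
#!/bin/sh
# Stub: the real call is `cinder show <id>`, whose table includes an
# `attachments` row ("[]" when the volume is attached to nothing).
cinder() { echo "| attachments | [] |"; }

vol_id="acfb6248-88c7-4391-8682-847e07a1019b"
row=$(cinder show "$vol_id" | grep attachments)
case "$row" in
  *"[]"*) state="detached" ;;   # empty attachment list
  *)      state="attached" ;;
esac
echo "volume $vol_id is $state"
```

An empty attachment list is expected at this stage, though: attachment happens only when a pod using the claim is scheduled onto a node, so the real anomaly remains the claim stuck in Pending.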
We rewrote the volume binder in 3.2+ to eliminate some similar race conditions, and this issue should no longer be present. Closing, since we have tested Cinder provisioning and binding and it works as expected. If you can reproduce this on 3.2+, please re-open.