Dynamic provisioning broke upstream: https://github.com/kubernetes/kubernetes/issues/21041. The same issue affects AWS. We need this fixed for 3.2.
Hi, I was able to create a volume on AWS with puddle atomic-openshift-3.1.1.903-1.git.0.91c3aef.el7.x86_64, so I cannot reproduce this right now.

[root@ip-172-18-2-52 ~]# oc get pv pv-aws-tcnbp -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    kubernetes.io/createdby: aws-ebs-dynamic-provisioner
    volume.alpha.kubernetes.io/storage-class: foo
    volume.experimental.kubernetes.io/provisioning-required: volume.experimental.kubernetes.io/provisioning-completed
  creationTimestamp: 2016-02-24T10:02:01Z
  generateName: pv-aws-
  name: pv-aws-tcnbp
  resourceVersion: "4243"
  selfLink: /api/v1/persistentvolumes/pv-aws-tcnbp
  uid: a5ee03fd-dadd-11e5-af72-0e5ac392c83d
spec:
  accessModes:
  - ReadWriteOnce
  awsElasticBlockStore:
    fsType: ext4
    volumeID: aws://us-east-1d/vol-972f4d34
  capacity:
    storage: 3Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: claim2
    namespace: default
    resourceVersion: "4236"
    uid: a5ed3613-dadd-11e5-af72-0e5ac392c83d
  persistentVolumeReclaimPolicy: Delete
status:
  phase: Bound
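Not part of the comment above, but one way to double-check that the provisioner really created the backing volume on the AWS side is to look it up by the volume ID from the PV (assumes the aws CLI is installed and configured; this command is standard aws ec2 usage, not something taken from this bug):

# volume ID and zone taken from the PV's awsElasticBlockStore.volumeID above
[root@ip-172-18-2-52 ~]# aws ec2 describe-volumes --volume-ids vol-972f4d34 --region us-east-1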
Fix is upstream: https://github.com/kubernetes/kubernetes/pull/21738
Sorry, this isn't MODIFIED; the fix hasn't merged into OpenShift yet. Moving back to POST.
Verified dynamic provisioning on GCE works.

# openshift version
openshift v3.2.0.4
kubernetes v1.2.0-origin-41-g91d3e75
etcd 2.2.5

[root@lxia-ose32 ~]# oc create -f gce-pvc.json
persistentvolumeclaim "claim2" created

[root@lxia-ose32 ~]# oc get pv -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: PersistentVolume
  metadata:
    annotations:
      kubernetes.io/createdby: gce-pd-dynamic-provisioner
      volume.alpha.kubernetes.io/storage-class: foo
      volume.experimental.kubernetes.io/provisioning-required: volume.experimental.kubernetes.io/provisioning-completed
    creationTimestamp: 2016-03-18T05:22:14Z
    generateName: pv-gce-
    name: pv-gce-gl38a
    resourceVersion: "1274"
    selfLink: /api/v1/persistentvolumes/pv-gce-gl38a
    uid: 60203b0a-ecc9-11e5-91ed-42010af0000e
  spec:
    accessModes:
    - ReadWriteOnce
    capacity:
      storage: 3Gi
    claimRef:
      apiVersion: v1
      kind: PersistentVolumeClaim
      name: claim2
      namespace: default
      resourceVersion: "1263"
      uid: 5f596c59-ecc9-11e5-91ed-42010af0000e
    gcePersistentDisk:
      fsType: ext4
      pdName: kube-dynamic-6035b20d-ecc9-11e5-91ed-42010af0000e
    persistentVolumeReclaimPolicy: Delete
  status:
    phase: Bound
kind: List
metadata: {}
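The contents of gce-pvc.json are not shown in this comment. A minimal sketch of what it presumably looks like, assuming it matches the AWS claim JSON in the next comment (the claim name "claim2", the annotation value "foo", and the 3Gi request are taken from the output above; the rest is illustrative):

{
  "kind": "PersistentVolumeClaim",
  "apiVersion": "v1",
  "metadata": {
    "name": "claim2",
    "annotations": {
      "volume.alpha.kubernetes.io/storage-class": "foo"
    }
  },
  "spec": {
    "accessModes": [ "ReadWriteOnce" ],
    "resources": {
      "requests": { "storage": "3Gi" }
    }
  }
}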
Verified dynamic provisioning on AWS; it works.

[root@ip-172-18-12-10 ~]# openshift version
openshift v3.2.0.4
kubernetes v1.2.0-origin-41-g91d3e75
etcd 2.2.5

1. Create a PVC using the JSON file below:

{
  "kind": "PersistentVolumeClaim",
  "apiVersion": "v1",
  "metadata": {
    "name": "claim2",
    "labels": {
      "name": "testing"
    },
    "annotations": {
      "volume.alpha.kubernetes.io/storage-class": "foo"
    }
  },
  "spec": {
    "accessModes": [
      "ReadWriteOnce"
    ],
    "resources": {
      "requests": {
        "storage": "3Gi"
      }
    }
  }
}

2. Check PVC status:

[root@ip-172-18-12-10 ~]# oc get pvc
NAME      STATUS    VOLUME         CAPACITY   ACCESSMODES   AGE
claim2    Bound     pv-aws-hpobe   3Gi        RWO           8s

3. Check PV status:

[root@ip-172-18-12-10 ~]# oc get pv pv-aws-hpobe -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    kubernetes.io/createdby: aws-ebs-dynamic-provisioner
    volume.alpha.kubernetes.io/storage-class: foo
    volume.experimental.kubernetes.io/provisioning-required: volume.experimental.kubernetes.io/provisioning-completed
  creationTimestamp: 2016-03-18T09:16:54Z
  generateName: pv-aws-
  name: pv-aws-hpobe
  resourceVersion: "1395"
  selfLink: /api/v1/persistentvolumes/pv-aws-hpobe
  uid: 2851d92d-ecea-11e5-9833-0efcb7b98485
spec:
  accessModes:
  - ReadWriteOnce
  awsElasticBlockStore:
    fsType: ext4
    volumeID: aws://us-east-1d/vol-06ec07a4
  capacity:
    storage: 3Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: claim2
    namespace: default
    resourceVersion: "1388"
    uid: 284dcaf2-ecea-11e5-9833-0efcb7b98485
  persistentVolumeReclaimPolicy: Delete
status:
  phase: Bound
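Since persistentVolumeReclaimPolicy is Delete, deleting the claim should also remove the PV and the backing EBS volume. A quick cleanup check (not part of the original verification; the output shown is what I'd expect, not a captured transcript):

[root@ip-172-18-12-10 ~]# oc delete pvc claim2
persistentvolumeclaim "claim2" deleted
[root@ip-172-18-12-10 ~]# oc get pv
# expect pv-aws-hpobe to disappear once the deleter finishes,
# and vol-06ec07a4 to be removed on the AWS side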
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2016:1064