Description of problem:
Given that CNS and the gluster-block provisioner are ready, when a PVC is created requesting gluster-block storage, the provisioner does not understand a requested storage size that carries a unit suffix. That is, in `pvc.spec.resources.requests.storage`, the value '1' works, but '1Gi' does not. The log shows 'No space', which is misleading.

Version-Release number of selected component (if applicable):
openshift v3.7.0-0.158.0
cns-deploy-5.0.0-54.el7rhgs.x86_64
heketi-client-5.0.0-16.el7rhgs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Deploy CNS with the gluster-block provisioner.
2. Create a StorageClass and a PVC:

```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-block
provisioner: gluster.org/glusterblock
parameters:
  resturl: "http://172.30.209.1:8080"
  restuser: "admin"
  restauthenabled: "false"
  clusterids: "24506bb2b5c282b8ecf4ad7d39f98e8a"
  chapauthenabled: "true"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim1
  annotations:
    volume.beta.kubernetes.io/storage-class: gluster-block
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

3. `oc describe pvc claim1`:

```
....
6m 1m 10 gluster.org/glusterblock c8b6faae-b7c1-11e7-a020-0a580a820013 Warning ProvisioningFailed Failed to provision volume with StorageClass "gluster-block": failed to create volume: [heketi] error creating volume No space
```

4. Recreate the PVC with the capacity unit 'Gi' removed:

```
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim1
  annotations:
    volume.beta.kubernetes.io/storage-class: gluster-block
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1
```

Actual results:
In step 4, the PV is provisioned successfully:

```
15m 15m 1 gluster.org/glusterblock c8b6faae-b7c1-11e7-a020-0a580a820013 Normal ProvisioningSucceeded Successfully provisioned volume pvc-fcbbaa37-b7c9-11e7-b3a2-0050569f42d9
```

Expected results:
In step 3, the PV should be provisioned successfully.

Additional info:
Traced the heketi pod and found:

```
[heketi] INFO 2017/10/23 08:00:00 Creating block volume 8b99f7bc557076f08fe3f1082b3e38e6
[heketi] WARNING 2017/10/23 08:00:00 Free size is lesser than the block volume requested
[heketi] INFO 2017/10/23 08:00:00 No block hosting volumes found in the cluster list
[heketi] INFO 2017/10/23 08:00:00 brick_num: 0
[negroni] Started GET /queue/5848552f7ec0106f69da1b753166de12
[negroni] Completed 200 OK in 40.394µs
[heketi] INFO 2017/10/23 08:00:00 brick_num: 0
[heketi] INFO 2017/10/23 08:00:00 brick_num: 0
[heketi] INFO 2017/10/23 08:00:00 brick_num: 0
[heketi] ERROR 2017/10/23 08:00:00 /src/github.com/heketi/heketi/apps/glusterfs/block_volume_entry.go:58: Failed to create Block Hosting Volume: No space
[asynchttp] INFO 2017/10/23 08:00:00 asynchttp.go:129: Completed job 5848552f7ec0106f69da1b753166de12 in 189.518465ms
[heketi] ERROR 2017/10/23 08:00:00 /src/github.com/heketi/heketi/apps/glusterfs/app_block_volume.go:83: Failed to create block volume: No space
[negroni] Started GET /queue/5848552f7ec0106f69da1b753166de12
[negroni] Completed 500 Internal Server Error in 90.146µs
```

This is not actually caused by insufficient space: running `heketi-cli --server http://172.30.209.1:8080 --user admin volume create --block --size 1Gi` yields `Error: invalid argument "1Gi" for "--size" flag: strconv.ParseInt: parsing "1Gi": invalid syntax`.
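The `strconv.ParseInt` error above shows that heketi's `--size` flag expects a bare integer, so the suffixed Kubernetes quantity string must be converted before it reaches heketi. A minimal sketch of such a conversion, assuming the target unit is whole GiB (the exact unit handling across the layers is what this bug is about; `quantity_to_gib` is a hypothetical helper, not provisioner code):

```python
# Illustrative sketch: convert a Kubernetes resource quantity string such as
# "1Gi" into the bare integer that a flag like heketi's --size can accept,
# instead of passing the suffixed string through unchanged.
import math
import re

# Binary (Ki/Mi/Gi/Ti) and decimal (k/M/G/T) suffixes from the Kubernetes
# resource.Quantity grammar, mapped to their size in bytes.
SUFFIX_BYTES = {
    "": 1,
    "k": 10**3, "M": 10**6, "G": 10**9, "T": 10**12,
    "Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40,
}

def quantity_to_gib(quantity: str) -> int:
    """Parse a quantity like '1Gi' or '2G' and round up to whole GiB."""
    m = re.fullmatch(r"(\d+)([A-Za-z]*)", quantity.strip())
    if m is None or m.group(2) not in SUFFIX_BYTES:
        raise ValueError(f"cannot parse quantity: {quantity!r}")
    size_bytes = int(m.group(1)) * SUFFIX_BYTES[m.group(2)]
    return max(1, math.ceil(size_bytes / 2**30))

print(quantity_to_gib("1Gi"))  # -> 1
print(quantity_to_gib("1"))    # bare integer passes through -> 1
```

With a conversion like this in the provisioner, both `storage: 1` and `storage: 1Gi` would reach heketi as the integer 1.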
Updates: In 3.9, with the *glusterfs* provisioner, the actual capacity (2G) is about 1G greater than the requested capacity (1Gi). Replacing '1Gi' with '1' makes it work correctly.

```
# oc get pvc glusterfs -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-class: glusterfs
    volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/glusterfs
  creationTimestamp: 2018-01-05T02:40:52Z
  name: glusterfs
  namespace: storage-project
  resourceVersion: "95634"
  selfLink: /api/v1/namespaces/storage-project/persistentvolumeclaims/glusterfs
  uid: d8c8e043-f1c1-11e7-9c0c-0050569f5abb
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  volumeName: pvc-d8c8e043-f1c1-11e7-9c0c-0050569f5abb
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 2G
  phase: Bound
```
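One plausible reading of this symptom (an assumption, not a trace of the actual code path) is a binary-to-decimal unit conversion with round-up: 1Gi is 2^30 bytes, which is slightly more than 1 decimal gigabyte, so a layer that converts to whole decimal GB and rounds up reports 2G. The arithmetic:

```python
# Sketch of the G-vs-Gi rounding that can inflate the provisioned size.
# This only illustrates the arithmetic behind the 1Gi -> 2G symptom; the
# real conversion sites are in the fixes referenced in this bug.
import math

requested_bytes = 1 * 2**30            # "storage: 1Gi" = 1073741824 bytes
decimal_gb = requested_bytes / 10**9   # ~1.074 decimal gigabytes
provisioned_g = math.ceil(decimal_gb)  # round up to whole GB -> 2
print(f"{provisioned_g}G")             # reported capacity: 2G
```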
There was an inconsistency in the storage calculation with respect to `G` vs `Gi`. These inconsistencies are fixed in different layers, starting from gluster-block. For example:

Gluster Block: https://review.gluster.org/#/c/19027/
Heketi: https://github.com/heketi/heketi/pull/935
Provisioner: https://github.com/kubernetes-incubator/external-storage/pull/496

These fixes will be part of the CNS 3.9 release; I am also checking whether we can get them into CNS 3.7.
*** Bug 1537461 has been marked as a duplicate of this bug. ***
This is fixed in the latest gluster-block provisioner container, rhgs-gluster-block-prov-container-3.3.1-1, and in cns-deploy-6.0.0-2.el7rhgs.
Based on comment 14 and comment 15, moving the bug to verified.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2018:0642