Bug 1505290 - Gluster-block does not understand PVC's storage unit
Summary: Gluster-block does not understand PVC's storage unit
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: kubernetes
Version: rhgs-3.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: CNS 3.9
Assignee: Humble Chirammal
QA Contact: Rachael
URL:
Whiteboard:
Duplicates: 1537461
Depends On:
Blocks: 1526414 1537461 1544735 1544743
 
Reported: 2017-10-23 08:34 UTC by Jianwei Hou
Modified: 2019-12-02 21:21 UTC
CC List: 12 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Previously, the gluster-block provisioner did not identify the storage units in the PVC correctly. For example, it identified '1' as 1 GiB by default, and provisioning failed on '1Gi'. With this enhancement, the gluster-block provisioner identifies the storage units correctly, i.e., '1' is treated as 1 byte, '1Gi' as 1 gibibyte, and '1Ki' as 1 kibibyte.
Clone Of:
Clones: 1537461, 1544735
Environment:
Last Closed: 2018-04-05 03:25:59 UTC
Embargoed:




Links
System ID Priority Status Summary Last Updated
Red Hat Bugzilla 1544735 medium CLOSED Gluster-block does not understand PVC's storage unit 2021-02-22 00:41:40 UTC
Red Hat Product Errata RHEA-2018:0642 None None None 2018-04-05 03:26:26 UTC

Internal Links: 1544735

Description Jianwei Hou 2017-10-23 08:34:07 UTC
Description of problem:
Given that CNS and the gluster-block provisioner are ready, when a PVC is created requesting gluster-block storage, the provisioner does not understand the request if the requested storage is suffixed with a unit, i.e., in pvc.spec.resources.requests.storage the number '1' works, but '1Gi' does not.

The log generally shows 'No space', which is misleading.
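For reference, pvc.spec.resources.requests.storage is parsed as a Kubernetes resource quantity, where a bare '1' means 1 byte and '1Gi' means 2^30 bytes. A minimal sketch (not the provisioner's code) using the upstream k8s.io/apimachinery/pkg/api/resource package shows how these strings are interpreted:

```go
// Minimal sketch: how Kubernetes interprets the quantity strings accepted by
// pvc.spec.resources.requests.storage. Not taken from the provisioner.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	for _, s := range []string{"1", "1Ki", "1Gi"} {
		q := resource.MustParse(s)
		// Value() reports the quantity as an integer number of bytes.
		fmt.Printf("%-3s = %d bytes\n", s, q.Value())
	}
	// 1   = 1 bytes
	// 1Ki = 1024 bytes
	// 1Gi = 1073741824 bytes
}
```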

Version-Release number of selected component (if applicable):
openshift v3.7.0-0.158.0
cns-deploy-5.0.0-54.el7rhgs.x86_64
heketi-client-5.0.0-16.el7rhgs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Deploy CNS with gluster-block provisioner
2. Create StorageClass and PVC
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
 name: gluster-block
provisioner: gluster.org/glusterblock
parameters:
 resturl: "http://172.30.209.1:8080"
 restuser: "admin"
 restauthenabled: "false"
 clusterids: "24506bb2b5c282b8ecf4ad7d39f98e8a"
 chapauthenabled: "true"

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim1
  annotations:
    volume.beta.kubernetes.io/storage-class: gluster-block
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

3. oc describe pvc claim1
....
  6m            1m              10      gluster.org/glusterblock c8b6faae-b7c1-11e7-a020-0a580a820013                   Warning         ProvisioningFailed      Failed to provision volume with StorageClass "gluster-block":  failed to create volume: [heketi] error creating volume No space

4. Recreate PVC, with capacity unit 'Gi' removed
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim1
  annotations:
    volume.beta.kubernetes.io/storage-class: gluster-block
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1


Actual results:
4. PV provisioned successfully
 15m           15m             1       gluster.org/glusterblock c8b6faae-b7c1-11e7-a020-0a580a820013                   Normal          ProvisioningSucceeded   Successfully provisioned volume pvc-fcbbaa37-b7c9-11e7-b3a2-0050569f42d9

Expected results:
In step 3, the PV should be provisioned successfully.

Additional info:
Traced heketi pod and found:
```
[heketi] INFO 2017/10/23 08:00:00 Creating block volume 8b99f7bc557076f08fe3f1082b3e38e6
[heketi] WARNING 2017/10/23 08:00:00 Free size is lesser than the block volume requested
[heketi] INFO 2017/10/23 08:00:00 No block hosting volumes found in the cluster list
[heketi] INFO 2017/10/23 08:00:00 brick_num: 0
[negroni] Started GET /queue/5848552f7ec0106f69da1b753166de12
[negroni] Completed 200 OK in 40.394µs
[heketi] INFO 2017/10/23 08:00:00 brick_num: 0
[heketi] INFO 2017/10/23 08:00:00 brick_num: 0
[heketi] INFO 2017/10/23 08:00:00 brick_num: 0
[heketi] ERROR 2017/10/23 08:00:00 /src/github.com/heketi/heketi/apps/glusterfs/block_volume_entry.go:58: Failed to create Block Hosting Volume: No space
[asynchttp] INFO 2017/10/23 08:00:00 asynchttp.go:129: Completed job 5848552f7ec0106f69da1b753166de12 in 189.518465ms
[heketi] ERROR 2017/10/23 08:00:00 /src/github.com/heketi/heketi/apps/glusterfs/app_block_volume.go:83: Failed to create block volume: No space
[negroni] Started GET /queue/5848552f7ec0106f69da1b753166de12
[negroni] Completed 500 Internal Server Error in 90.146µs
```

Actually, this is not caused by a lack of space, because when you run `heketi-cli --server http://172.30.209.1:8080 --user admin volume create --block --size 1Gi`, you get `Error: invalid argument "1Gi" for "--size" flag: strconv.ParseInt: parsing "1Gi": invalid syntax`
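That error indicates heketi's --size flag only accepts a plain integer, so the provisioner has to round the PVC quantity up to a whole number of units before calling heketi rather than passing the '1Gi' string through. Below is a rough sketch of such a conversion, assuming the size is counted in GiB and using the upstream resource package; it is an illustration, not the code from the eventual fix:

```go
// Sketch only: convert a PVC storage request into the whole-GiB integer that
// an integer-only --size flag can accept. Assumes GiB units; if the backend
// counts decimal GB instead, the same approach applies with a 10^9 divisor.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

const giB = int64(1) << 30 // 1Gi in bytes

// roundUpToGiB rounds a Kubernetes quantity up to a whole number of GiB.
func roundUpToGiB(q resource.Quantity) int64 {
	return (q.Value() + giB - 1) / giB
}

func main() {
	req := resource.MustParse("1Gi") // value of pvc.spec.resources.requests.storage
	fmt.Println(roundUpToGiB(req))   // prints 1, a valid integer for --size
}
```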

Comment 4 Jianwei Hou 2018-01-05 06:32:09 UTC
Updates:

In 3.9, the actual capacity is 1Gi greater than the requested capacity when using a *glusterfs* provisioner. Replacing '1Gi' with '1' makes it work correctly (see the illustration after the YAML output below).

# oc get pvc glusterfs -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-class: glusterfs
    volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/glusterfs
  creationTimestamp: 2018-01-05T02:40:52Z
  name: glusterfs
  namespace: storage-project
  resourceVersion: "95634"
  selfLink: /api/v1/namespaces/storage-project/persistentvolumeclaims/glusterfs
  uid: d8c8e043-f1c1-11e7-9c0c-0050569f5abb
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  volumeName: pvc-d8c8e043-f1c1-11e7-9c0c-0050569f5abb
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 2G
  phase: Bound
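One explanation consistent with the `G` vs `Gi` inconsistency described in comment 6 below: if a layer converts the binary 1Gi request into decimal gigabytes and rounds up to a whole number, a 1Gi request comes back as 2G. This is a hypothetical illustration, not traced from the provisioner:

```go
// Hypothetical illustration of a G vs Gi mixup: 1Gi (2^30 bytes) does not fit
// into 1 decimal G (10^9 bytes), so rounding up to whole decimal gigabytes
// reports 2G for a 1Gi request.
package main

import "fmt"

func main() {
	const gi = int64(1) << 30           // 1Gi = 1073741824 bytes
	const g = int64(1e9)                // 1G  = 1000000000 bytes
	requested := 1 * gi                 // the PVC asked for 1Gi
	reported := (requested + g - 1) / g // ceiling division into decimal G
	fmt.Printf("requested %d bytes -> reported %dG\n", requested, reported)
	// requested 1073741824 bytes -> reported 2G
}
```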

Comment 6 Humble Chirammal 2018-01-22 07:58:06 UTC
There was an inconsistency in the storage calculation with respect to `G` vs `Gi`. These inconsistencies are fixed in different layers, starting from gluster-block.

For example:

Gluster Block:
https://review.gluster.org/#/c/19027/

Heketi:
https://github.com/heketi/heketi/pull/935

Provisioner:
https://github.com/kubernetes-incubator/external-storage/pull/496


These fixes will be part of the CNS 3.9 release. I am also checking whether we can get these fixes into CNS 3.7.

Comment 7 Humble Chirammal 2018-01-24 08:55:23 UTC
*** Bug 1537461 has been marked as a duplicate of this bug. ***

Comment 11 Humble Chirammal 2018-02-22 07:35:52 UTC
This is fixed in the latest gluster-block provisioner container, rhgs-gluster-block-prov-container-3.3.1-1, and in cns-deploy-6.0.0-2.el7rhgs.

Comment 17 Rachael 2018-03-09 05:56:35 UTC
Based on comment 14 and comment 15, moving the bug to verified.

Comment 22 errata-xmlrpc 2018-04-05 03:25:59 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:0642

