Bug 1479001 - only 70GB block device is available to app pod when 75GB is requested
Status: CLOSED NOTABUG
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: CNS-deployment
Version: cns-3.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Assigned To: Humble Chirammal
QA Contact: krishnaram Karthick
Depends On:
Blocks: 1445448 1479175
Reported: 2017-08-07 12:52 EDT by krishnaram Karthick
Modified: 2017-08-16 04:57 EDT
CC List: 12 users

Doc Type: If docs needed, set a value
Cloned To: 1479175
Last Closed: 2017-08-16 04:57:56 EDT
Type: Bug


Attachments: None
Description krishnaram Karthick 2017-08-07 12:52:28 EDT
Description of problem:
When a 75 GB gluster-block PVC is requested and consumed by an app pod, only 70 GB of capacity is seen inside the app pod.

GLUSTER-BLOCK:
===============
oc get pvc
NAME               STATUS    VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
logging-es-0       Bound     pvc-7f24fe19-7b8d-11e7-8464-005056a56b97   75G        RWO           glusterblock   14m
logging-es-1       Bound     pvc-897bfde9-7b8d-11e7-8464-005056a56b97   75G        RWO           glusterblock   14m
logging-es-2       Bound     pvc-93d1815b-7b8d-11e7-8464-005056a56b97   75G        RWO           glusterblock   13m
logging-es-ops-0   Bound     pvc-9f9d77e6-7b8d-11e7-8464-005056a56b97   75G        RWO           glusterblock   13m
logging-es-ops-1   Bound     pvc-aa2ab652-7b8d-11e7-8464-005056a56b97   75G        RWO           glusterblock   13m
logging-es-ops-2   Bound     pvc-b49029eb-7b8d-11e7-8464-005056a56b97   75G        RWO           glusterblock   13m


Filesystem                                                                                      Size  Used Avail Use% Mounted on
/dev/mapper/docker-253:2-6476-003147d301ec2b32853297abebd41cf62c67e715de1c3331999355b1843900e6   10G  472M  9.6G   5% /
tmpfs                                                                                            24G     0   24G   0% /dev
tmpfs                                                                                            24G     0   24G   0% /sys/fs/cgroup
/dev/mapper/mpathc                                                                               70G   36M   70G   1% /elasticsearch/persistent ---> This is the block device

This issue is seen only with gluster-block devices. When the PVC is backed by a GlusterFS volume, the exact requested storage capacity is provided.
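df -h rounds to binary units, so exact byte counts are more telling here. A minimal check from inside the app pod, assuming the /dev/mapper/mpathc device path shown above and that the blockdev utility is present in the pod image:

sh-4.2# df -B1 /elasticsearch/persistent          # mounted filesystem size in exact bytes rather than a rounded "70G"
sh-4.2# blockdev --getsize64 /dev/mapper/mpathc   # exact size of the underlying block device in bytes

For reference, 70 GiB is 75,161,927,680 bytes while a decimal 75 GB is 75,000,000,000 bytes, so the byte counts show whether the device is really smaller than what was requested.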

GLUSTERFS:
===========
sh-4.2# df -h
Filesystem                                                                                      Size  Used Avail Use% Mounted on
/dev/mapper/docker-253:2-6106-c716ebae99e55307a07696f4cf6a91059dea7bb947fceeb7f06b73f5b630b3d2   10G  598M  9.4G   6% /
tmpfs                                                                                            24G     0   24G   0% /dev
tmpfs                                                                                            24G     0   24G   0% /sys/fs/cgroup
/dev/mapper/vg_rhel_dhcp47----104--var-lv_var                                                    59G  614M   59G   2% /etc/hosts
shm                                                                                              64M     0   64M   0% /dev/shm
10.70.46.11:vol_efcc287217f7e520aafb5a63b2f0a3b2                                                 75G  234M   75G   1% /var/lib/mongodb/data ---> GlusterFS volume
tmpfs                                                                                            24G   16K   24G   1% /run/secrets/kubernetes.io/serviceaccount
sh-4.2# exit
[root@dhcp47-57 ~]# oc get pvc
NAME        STATUS    VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
mongodb-1   Bound     pvc-65ef97ea-7b8f-11e7-8464-005056a56b97   1Gi        RWO           slow           9m
mongodb-2   Bound     pvc-e5f19c2d-7b8f-11e7-8464-005056a56b97   75Gi       RWO           slow           6m

This is not acceptable behavior.

Version-Release number of selected component (if applicable):
cns-deploy-5.0.0-12.el7rhgs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Submit a claim (PVC) requesting a 75 GB gluster-block device and check the actual capacity provided to the app pod (see the sketched example below).
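A minimal standalone reproducer, sketched here under the assumption that the "glusterblock" storage class from the output above is available (the PVC name test-block-75g is made up for illustration):

# hypothetical PVC requesting 75G (decimal) from the glusterblock storage class
oc create -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-block-75g
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 75G
  storageClassName: glusterblock
EOF
oc get pvc test-block-75g   # check CAPACITY, then compare with df inside a pod that mounts the claim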


Actual results:
Less than 75 GB is provided (the app pod sees a 70G block device).

Expected results:
Exactly 75 GB should be provided.

Additional info:
Comment 5 krishnaram Karthick 2017-08-09 00:51:01 EDT
Prasanna and I tried to reproduce the issue, and we see that a request to create a 75G device is passed on by heketi as 70G.

This indicates that the issue is not caused by gluster-block but by the way the request is sent. We'll have to investigate why this is happening.

This behavior is seen with both the elasticsearch (default template from OCP) and mongoDB templates. Is this how storage capacity is calculated in OCP?
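One possible explanation, stated here as an assumption rather than a confirmed root cause: the templates pass the capacity as 75G, which Kubernetes treats as 75 x 10^9 bytes, while heketi and gluster-block size volumes in GiB, and 75 x 10^9 bytes is just under 70 GiB, which matches the 70G seen in the heketi request. The conversion can be checked with coreutils numfmt:

[root@dhcp47-57 ~]# numfmt --from=si --to=iec-i 75G    # 75 x 10^9 bytes expressed in binary (GiB) units
70Gi

Going the other way, 70 GiB is 75,161,927,680 bytes, slightly more than the 75,000,000,000 bytes that 75G denotes, so the pod arguably does receive the requested decimal capacity; df -h simply reports it in binary units as 70G.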

[root@dhcp47-57 ~]# oc new-app mongo.json --param=DATABASE_SERVICE_NAME=test-krk --param=VOLUME_CAPACITY=75G

snippet of heketi log:
=======================
[kubeexec] DEBUG 2017/08/09 04:39:41 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:250: Host: dhcp46-11.lab.eng.blr.redhat.com Pod: glusterfs-br2qq Command: gluster-block create vol_bc8159f87fe713da6db71010dca9abe9/blockvol_835a51344be42a1d0b1a3b0efe372246  ha 3 auth enable  10.70.46.11,10.70.47.23,10.70.47.69 70G --json
Result: { "IQN": "iqn.2016-12.org.gluster-block:60e14200-f87c-4eb1-8f9c-b29e7d5d214a", "USERNAME": "60e14200-f87c-4eb1-8f9c-b29e7d5d214a", "PASSWORD": "7cc35794-5450-46ff-b3ec-a04b7e031562", "PORTAL(S)": [ "10.70.46.11:3260", "10.70.47.23:3260", "10.70.47.69:3260" ], "RESULT": "SUCCESS" }
[heketi] INFO 2017/08/09 04:39:41 Created block volume 835a51344be42a1d0b1a3b0efe372246
[asynchttp] INFO 2017/08/09 04:39:41 asynchttp.go:129: Completed job fa72164e6dbe8f56db88624f7995555d in 37.987264117s
[negroni] Started GET /queue/fa72164e6dbe8f56db88624f7995555d
[negroni] Completed 303 See Other in 66.175µs
[negroni] Started GET /blockvolumes/835a51344be42a1d0b1a3b0efe372246
[negroni] Completed 200 OK in 3.025844ms
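
If 75 binary gigabytes inside the pod is what is actually wanted, a hedged workaround (assuming the unit-conversion reading above is correct and that the template passes the parameter straight into the PVC storage request) would be to specify the capacity in GiB:

[root@dhcp47-57 ~]# oc new-app mongo.json --param=DATABASE_SERVICE_NAME=test-krk --param=VOLUME_CAPACITY=75Gi

A 75Gi request would be expected to reach heketi as 75 GiB and produce a block device that df -h reports as 75G, matching the GlusterFS mongodb-2 example above.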
