Bug 1534750

Summary: RFE gluster installation: storage class 'glusterblock' is requested
Product: OpenShift Container Platform
Component: Installer
Version: 3.9.0
Target Release: 3.10.0
Status: CLOSED ERRATA
Severity: medium
Priority: unspecified
Keywords: RFE
Reporter: Hongkai Liu <hongkliu>
Assignee: Jose A. Rivera <jarrpa>
QA Contact: Johnny Liu <jialiu>
CC: aos-bugs, ekuric, gucore, hongkliu, jokerman, mifiedle, mmccomas, pprakash
Hardware: Unspecified
OS: Unspecified
Type: Bug
Doc Type: No Doc Update
Last Closed: 2018-07-30 19:09:00 UTC

Description Hongkai Liu 2018-01-15 21:27:05 UTC
Description of problem:
This is more a feature request than a bug report.

After running openshift-ansible/playbooks/openshift-glusterfs/config.yml, we get a StorageClass (sc) for glusterfs.

# oc get sc 
NAME                     PROVISIONER                AGE
glusterfs-storage        kubernetes.io/glusterfs    1h
gp2 (default)            kubernetes.io/aws-ebs      7h


This is all good.
However, we have started to test the block volume feature described here: https://redhatstorage.redhat.com/2017/10/05/container-native-storage-for-the-openshift-masses/

It would be nice if the playbook set up another storage class, something like this:
# oc get sc 
NAME                     PROVISIONER                AGE
glusterblock             gluster.org/glusterblock   1h
glusterfs-storage        kubernetes.io/glusterfs    1h
gp2 (default)            kubernetes.io/aws-ebs      7h


Then we can use PVCs backed by glusterblock for Elasticsearch and metrics.
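
For illustration, a PVC against such a class might look like this (a sketch only; the claim name and size are hypothetical):

# oc create -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: logging-es-0
spec:
  storageClassName: glusterblock
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF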

For now, the workaround is to create the storage class manually by following the steps here:
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html-single/container-native_storage_for_openshift_container_platform/#Block_Storage
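
The manually created class is roughly of this shape (a sketch only; the resturl, secret names, and parameter values below are placeholders, and the authoritative field list is in the doc above):

# oc create -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterblock
provisioner: gluster.org/glusterblock
parameters:
  resturl: "http://heketi-storage-glusterfs.apps.example.com"
  restuser: "admin"
  restsecretname: "heketi-block-secret"
  restsecretnamespace: "glusterfs"
  hacount: "3"
EOF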

Comment 1 Jose A. Rivera 2018-02-09 15:17:28 UTC
Being worked on through this Trello card: https://trello.com/c/1sInXRsD/602-1-glusterfs-create-glusterblock-storageclass

Comment 2 Jose A. Rivera 2018-02-14 13:33:36 UTC
Merged PR: https://github.com/openshift/openshift-ansible/pull/6922

Comment 3 Hongkai Liu 2018-02-14 18:55:09 UTC
Verified with:

# yum list installed | grep openshift
atomic-openshift.x86_64         3.9.0-0.42.0.git.0.e604ce5.el7

glusterfs_devices=["/dev/xvdf"]
openshift_storage_glusterfs_wipe=true
openshift_storage_glusterfs_image=registry.access.redhat.com/rhgs3/rhgs-server-rhel7
openshift_storage_glusterfs_version=3.3.0-362
openshift_storage_glusterfs_heketi_image=registry.access.redhat.com/rhgs3/rhgs-volmanager-rhel7
openshift_storage_glusterfs_heketi_version=3.3.0-364
openshift_hosted_registry_glusterfs_swap=true
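# Per the PR in comment 2, the *_block_* variables below enable gluster-block
# deployment, and openshift_storage_glusterfs_block_storageclass=true is what
# creates the additional StorageClass.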
openshift_storage_glusterfs_block_deploy=true
openshift_storage_glusterfs_block_image=registry.access.redhat.com/rhgs3/rhgs-gluster-block-prov-rhel7
openshift_storage_glusterfs_block_version=3.3.0-362
openshift_storage_glusterfs_block_host_vol_size=800
openshift_storage_glusterfs_block_storageclass=true

$ git log --oneline -1
b90ec246f (HEAD -> master, origin/master, origin/HEAD) Merge pull request #7135 from abutcher/node-accept-fail

$ ansible-playbook -i /tmp/2.file openshift-ansible/playbooks/openshift-glusterfs/config.yml

# oc get sc
NAME                      PROVISIONER                AGE
glusterfs-storage         kubernetes.io/glusterfs    5m
glusterfs-storage-block   gluster.org/glusterblock   3m
gp2 (default)             kubernetes.io/aws-ebs      1h


Thanks, Jose.

Comment 4 Hongkai Liu 2018-02-14 20:31:35 UTC
Hi Jose,

I have another question:
When I delete a PVC based on glusterfs-storage, the space is released in the heketi topology (checked via the following command):

heketi-cli --server http://heketi-storage-glusterfs.apps.0214-1am.qe.rhcloud.com --user admin --secret <secret> topology info | grep -E "Node Id|State"
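
(To watch space usage directly, something like this should also work; the grep pattern is illustrative and assumes the topology output lists per-device Size/Used/Free values:)

heketi-cli --server <server> --user admin --secret <secret> topology info | grep -E "Size|Used|Free"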

When I do the same with glusterfs-storage-block, the space is NOT released.

Is this behavior by design?

Comment 5 Jose A. Rivera 2018-02-14 21:03:21 UTC
In a way, yes.

When a block volume create request comes in, by default a GlusterFS block-hosting volume is created to hold the block volume. The block volume occupies space in the block-hosting volume, but it's the block-hosting volume that "uses" capacity on the storage devices themselves. When block volumes are deleted, their associated block-hosting volumes are left behind in anticipation of future block volumes, which will be able to go on those same block-hosting volumes.

There's been some talk about changing this in heketi, but it is not a pressing issue at this time.
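
One way to observe this from the heketi side (same server flags as the command in comment 4, and assuming your heketi-cli version has the blockvolume subcommands):

heketi-cli blockvolume list   # the block volumes themselves
heketi-cli volume list        # block-hosting volumes persist here even after
                              # their block volumes are deleted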

Comment 6 Hongkai Liu 2018-02-14 22:07:00 UTC
Let me make sure that I understand this correctly:

"When block volumes are
deleted their associated block-hosting volumes are left behind in anticipation
of future block volumes which will be able to go on those same block-hosting
volumes."

Assume I do this:
request a 300G block volume V1 and then delete it, then request a 300G block volume V2.

Should I expect V2 to use the same space that was assigned to V1?

Comment 7 Jose A. Rivera 2018-02-15 00:23:35 UTC
Correct.
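
As a sketch of the sequence from comment 6 (IDs are hypothetical, and this assumes heketi-cli blockvolume support):

heketi-cli blockvolume create --size=300   # V1; a block-hosting volume (800G per
                                           # the inventory above) is created to hold it
heketi-cli blockvolume delete <V1-id>      # the hosting volume is retained
heketi-cli blockvolume create --size=300   # V2; placed on the retained hosting volume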

Comment 9 errata-xmlrpc 2018-07-30 19:09:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:1816