Bug 1534750
| Summary: | RFE gluster installation: storage class 'glusterblock' is requested | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Hongkai Liu <hongkliu> |
| Component: | Installer | Assignee: | Jose A. Rivera <jarrpa> |
| Status: | CLOSED ERRATA | QA Contact: | Johnny Liu <jialiu> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 3.9.0 | CC: | aos-bugs, ekuric, gucore, hongkliu, jokerman, mifiedle, mmccomas, pprakash |
| Target Milestone: | --- | Keywords: | RFE |
| Target Release: | 3.10.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | No Doc Update |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2018-07-30 19:09:00 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description Hongkai Liu 2018-01-15 21:27:05 UTC
Being worked on through this Trello card: https://trello.com/c/1sInXRsD/602-1-glusterfs-create-glusterblock-storageclass

Verified with:

```
# yum list installed | grep openshift
atomic-openshift.x86_64    3.9.0-0.42.0.git.0.e604ce5.el7
```

Inventory variables:

```
glusterfs_devices=["/dev/xvdf"]
openshift_storage_glusterfs_wipe=true
openshift_storage_glusterfs_image=registry.access.redhat.com/rhgs3/rhgs-server-rhel7
openshift_storage_glusterfs_version=3.3.0-362
openshift_storage_glusterfs_heketi_image=registry.access.redhat.com/rhgs3/rhgs-volmanager-rhel7
openshift_storage_glusterfs_heketi_version=3.3.0-364
openshift_hosted_registry_glusterfs_swap=true
openshift_storage_glusterfs_block_deploy=true
openshift_storage_glusterfs_block_image=registry.access.redhat.com/rhgs3/rhgs-gluster-block-prov-rhel7
openshift_storage_glusterfs_block_version=3.3.0-362
openshift_storage_glusterfs_block_host_vol_size=800
openshift_storage_glusterfs_block_storageclass=true
```

```
$ git log --oneline -1
b90ec246f (HEAD -> master, origin/master, origin/HEAD) Merge pull request #7135 from abutcher/node-accept-fail

$ ansible-playbook -i /tmp/2.file openshift-ansible/playbooks/openshift-glusterfs/config.yml
```

```
# oc get sc
NAME                      PROVISIONER                AGE
glusterfs-storage         kubernetes.io/glusterfs    5m
glusterfs-storage-block   gluster.org/glusterblock   3m
gp2 (default)             kubernetes.io/aws-ebs      1h
```

Thanks, Jose.

Hi @Jose, I have another question. When I delete a PVC backed by glusterfs-storage, the space is released in the heketi topology, which I check with:

```
heketi-cli --server http://heketi-storage-glusterfs.apps.0214-1am.qe.rhcloud.com --user admin --secret <secret> topology info | grep -E "Node Id|State"
```

When I do the same with glusterfs-storage-block, the space is NOT released. Is this behavior by design?

In a way, yes. When a block volume create request comes in, by default a GlusterFS block-hosting volume is created to hold the block volume. The block volume occupies space in the block-hosting volume, but it is the block-hosting volume that "uses" capacity on the storage devices themselves. When block volumes are deleted, their associated block-hosting volumes are left behind in anticipation of future block volumes, which will be able to go on those same block-hosting volumes. There has been some talk about changing this in heketi, but it is not a pressing issue at this time.

Let me make sure that I understand this correctly: "When block volumes are deleted their associated block-hosting volumes are left behind in anticipation of future block volumes which will be able to go on those same block-hosting volumes."

Assume I do this: request a 300G block volume V1 and then delete it, then request a 300G block volume V2. So, should I expect V2 to use the same space that was assigned to V1?

Correct.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:1816
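As a usage note: a minimal sketch of a claim against the new storage class, assuming a cluster where the `oc get sc` output above applies. Only the storageClassName comes from this bug; the claim name and size are illustrative.

```
# Hypothetical PVC against the glusterfs-storage-block storage class.
# Only storageClassName is taken from the bug; the rest is illustrative.
oc create -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-claim-example
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: glusterfs-storage-block
EOF
```

A bound claim here should show up as a block volume carved out of a block-hosting volume in the heketi topology, per the explanation above.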
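To make the space-reuse behavior above concrete, here is a sketch of the V1/V2 scenario using heketi-cli's blockvolume subcommands (assuming a heketi build with block volume support; the server URL and secret are the ones from the topology command earlier, and the volume names are illustrative):

```
# Same connection options as the topology command above.
HEKETI="heketi-cli --server http://heketi-storage-glusterfs.apps.0214-1am.qe.rhcloud.com --user admin --secret <secret>"

# Request a 300G block volume (V1). heketi carves it out of a
# block-hosting volume, creating one if none has free space.
$HEKETI blockvolume create --size=300 --name=v1

# Delete V1. The block-hosting volume stays behind, so `topology info`
# still shows its capacity as consumed on the devices.
$HEKETI blockvolume list           # note V1's ID
$HEKETI blockvolume delete <v1-id>

# Request V2. Per the explanation above, it should land in the existing
# block-hosting volume, reusing the space V1 occupied.
$HEKETI blockvolume create --size=300 --name=v2
```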