Red Hat Bugzilla – Bug 1478046
/etc/sysconfig/gluster-block file, which defines 'GB_GLFS_LRU_COUNT' value, should be persistent in RHGS image
Last modified: 2017-10-11 03:12:11 EDT
Description of problem:
As part of the fix for bz# 1456231 & 1196020, the GB_GLFS_LRU_COUNT value defined in the /etc/sysconfig/gluster-block file is set to a default of 5. This means 5+1 block-hosting volumes can be created safely without hitting 1456231 & 1196020. If more volumes need to be created, this configuration value has to be changed. Because the file resides inside the container, a container restart resets the file and discards any configuration changes made to it.
Either the configuration file has to be moved elsewhere (I'm not sure that is the right approach), or we have to persist this file across container restarts.
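One possible direction (a sketch only, not the implemented fix) is to have the sysconfig file supply just a default that an externally set environment variable can override, since the pod's environment, unlike a file inside the container, survives restarts. Whether gluster-blockd honors GB_GLFS_LRU_COUNT from its environment in this way is an assumption here:

```shell
# Sketch: let an externally supplied environment variable win over the
# sysconfig default, so a container restart does not lose the setting.
# (Assumption: gluster-blockd reads GB_GLFS_LRU_COUNT from its environment.)

# /etc/sysconfig/gluster-block style default -- only used when unset:
GB_GLFS_LRU_COUNT="${GB_GLFS_LRU_COUNT:-5}"
echo "$GB_GLFS_LRU_COUNT"
```

With nothing set in the environment this prints the default 5; a value injected via the pod spec would take precedence.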
Version-Release number of selected component (if applicable):
With this configuration we can support "3.5T" of disk space for gluster-block volumes without any change. Customers may create volumes of different sizes in their setups; ideally this should be good enough, but if more volumes are needed we will be in trouble. So, to be on the safe side, we need to take this bug for CNS.
(In reply to Humble Chirammal from comment #2)
> With this configuration we can support "3.5T" of disk space for gluster-block
> volumes without any change. Customers may create volumes of different sizes
> in their setups; ideally this should be good enough, but if more volumes are
> needed we will be in trouble. So, to be on the safe side, we need to take
> this bug for CNS.
As per the discussion in the #cns chat, we need to pass a higher value of this variable to the gluster-block executable.
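The "3.5T" figure above is consistent with simple arithmetic if each block-hosting volume is around 600 GiB; that per-volume size is an assumption for illustration, as the actual default depends on the deployment:

```shell
# Back-of-the-envelope capacity check (volume size is an assumed value):
lru_count=5                               # GB_GLFS_LRU_COUNT default
hosting_volumes=$((lru_count + 1))        # 5+1 volumes usable safely
volume_size_gib=600                       # assumed size per hosting volume
echo "$((hosting_volumes * volume_size_gib)) GiB"   # 3600 GiB ~= 3.5 TiB
```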
GB_GLFS_LRU_COUNT environment variable added:
The GB_GLFS_LRU_COUNT value is now set in build cns-deploy-5.0.0-34.el7rhgs.x86_64
[root@dhcp46-207 ~]# oc rsh glusterfs-jp160
sh-4.2# echo $GB_GLFS_LRU_COUNT
Moving the bug to verified.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.