Bug 1478046

Summary: /etc/sysconfig/gluster-block file, which defines 'GB_GLFS_LRU_COUNT' value, should be persistent in RHGS image
Product: [Red Hat Storage] Red Hat Gluster Storage Reporter: krishnaram Karthick <kramdoss>
Component: CNS-deployment    Assignee: Saravanakumar <sarumuga>
Status: CLOSED ERRATA QA Contact: krishnaram Karthick <kramdoss>
Severity: high Docs Contact:
Priority: unspecified    
Version: cns-3.6    CC: akhakhar, annair, asrivast, hchiramm, jarrpa, madam, mliyazud, mzywusko, pprakash, rhs-bugs, rreddy, rtalur, sarumuga
Target Milestone: ---   
Target Release: CNS 3.6   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: cns-deploy-5.0.0-29.el7rhgs Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2017-10-11 07:12:11 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1445448    

Description krishnaram Karthick 2017-08-03 13:29:12 UTC
Description of problem:
As part of the fix for bz# 1456231 & 1196020, the GB_GLFS_LRU_COUNT value defined in the /etc/sysconfig/gluster-block file is set to a default of 5. This means 5+1 block-hosting volumes can be created safely without hitting 1456231 & 1196020. If more volumes need to be created, this configuration value has to be changed. Because the file resides in the container, a restart of the container resets the file and discards any configuration changes made to it.

Either the configuration file has to be placed elsewhere (I'm not sure that is the right approach) or we have to persist this file.

Version-Release number of selected component (if applicable):
cns-deploy-5.0.0-12.el7rhgs.x86_64
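
For illustration, a minimal sketch of the problem, assuming a hypothetical pod name and the stock file contents (neither is taken from this setup):

oc rsh glusterfs-abcde                # pod name is hypothetical
sh-4.2# grep GB_GLFS_LRU_COUNT /etc/sysconfig/gluster-block
GB_GLFS_LRU_COUNT=5
sh-4.2# sed -i 's/^GB_GLFS_LRU_COUNT=.*/GB_GLFS_LRU_COUNT=15/' /etc/sysconfig/gluster-block
sh-4.2# exit
# The edit lives only in the container's writable layer; once the pod is
# recreated, the image default (5) is back because the file is not persisted.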

Comment 2 Humble Chirammal 2017-08-29 11:29:14 UTC
With this configuration we can support "3.5T" of disk space for gluster-block volumes without any change. The customer may create different sizes of volumes in their setup; ideally it should be good enough, but if we need more volumes then we will be in trouble. So, to be on the safer side, we need to take this bug for CNS.

Comment 3 Humble Chirammal 2017-08-31 07:47:05 UTC
(In reply to Humble Chirammal from comment #2)
> With this configuration we can support "3.5T" of disk space for
> gluster-block volumes without any change. The customer may create
> different sizes of volumes in their setup; ideally it should be good
> enough, but if we need more volumes then we will be in trouble. So, to
> be on the safer side, we need to take this bug for CNS.

As per the discussion in the #cns chat, we need to pass a higher value of this variable to the gluster-block executable.
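
A minimal sketch of what passing the variable through the pod environment could look like; the DaemonSet name (glusterfs) and the value 15 are assumptions, not taken from this comment:

# set the variable on the DaemonSet so every glusterfs pod inherits it
oc set env daemonset/glusterfs GB_GLFS_LRU_COUNT=15
# confirm it is visible inside one of the pods
oc rsh <glusterfs-pod> printenv GB_GLFS_LRU_COUNT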

Comment 6 Saravanakumar 2017-09-01 08:33:39 UTC
GB_GLFS_LRU_COUNT env. variable added:
https://github.com/gluster/gluster-kubernetes/pull/335
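
The pull request above wires the variable into the GlusterFS template; purely as a rough sketch (not the PR's actual code, and the startup-script context is an assumption), an entrypoint could propagate the environment variable into the sysconfig file before gluster-blockd starts:

# hypothetical startup snippet
if [ -n "$GB_GLFS_LRU_COUNT" ]; then
    sed -i "s/^GB_GLFS_LRU_COUNT=.*/GB_GLFS_LRU_COUNT=${GB_GLFS_LRU_COUNT}/" \
        /etc/sysconfig/gluster-block
fi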

Comment 8 krishnaram Karthick 2017-09-12 02:44:30 UTC
GB_GLFS_LRU_COUNT value is now set in build - cns-deploy-5.0.0-34.el7rhgs.x86_64

[root@dhcp46-207 ~]# oc rsh glusterfs-jp160
sh-4.2# echo $GB_GLFS_LRU_COUNT
15

Moving the bug to verified.
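
As an additional hedged check of the persistence aspect (not part of the verification above; the replacement pod name is hypothetical), the value could be re-read after the DaemonSet recreates the pod:

oc delete pod glusterfs-jp160
# wait for the DaemonSet to schedule a replacement pod, then:
oc rsh <new-glusterfs-pod> printenv GB_GLFS_LRU_COUNT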

Comment 10 errata-xmlrpc 2017-10-11 07:12:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:2881