Description of problem:

The reconfigure option inode-lru-limit, used to change the lru limit of the inode table of the brick process, does not actually change the lru limit value maintained by the inode table (nor does it purge the extra inodes from the inode table if the new value is less than the previous one). It only changes protocol/server's copy of the lru limit kept in its private structure. For the new value to take effect, the brick process has to be restarted.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:
The lru limit and the lru list of the inode table are not changed.

Expected results:
The new value should take effect dynamically, without a brick restart.

Additional info:
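To make the failure mode concrete, here is a minimal sketch of the pre-fix reconfigure path (GF_OPTION_RECONF is the real glusterfs option-parsing macro; the surrounding function body and field names are simplified for illustration, not the verbatim source):

/* Minimal sketch of the buggy reconfigure path in protocol/server,
 * compiled against the glusterfs tree; names are illustrative. */
#include "xlator.h"
#include "server.h"

int
server_reconfigure (xlator_t *this, dict_t *options)
{
        server_conf_t *conf = this->private;
        int            ret  = -1;

        /* Re-read the option into protocol/server's private copy ... */
        GF_OPTION_RECONF ("inode-lru-limit", conf->inode_lru_limit,
                          options, int32, out);

        /* ... but the live inode table of each bound_xl still holds the
         * old itable->lru_limit and nothing is pruned, so the new value
         * stays invisible until the brick process restarts. */
        ret = 0;
out:
        return ret;
}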
REVIEW: http://review.gluster.org/7957 (protocol/server: reflect lru limit in inode table also) posted (#1) for review on master by Raghavendra Bhat (raghavendra)
REVIEW: http://review.gluster.org/7957 (protocol/server: reflect lru limit in inode table also) posted (#2) for review on master by Raghavendra Bhat (raghavendra)
REVIEW: http://review.gluster.org/7957 (protocol/server: reflect lru limit in inode table also) posted (#3) for review on master by Raghavendra Bhat (raghavendra)
REVIEW: http://review.gluster.org/7957 (protocol/server: reflect lru limit in inode table also) posted (#4) for review on master by Raghavendra Bhat (raghavendra)
REVIEW: http://review.gluster.org/7957 (protocol/server: reflect lru limit in inode table also) posted (#5) for review on master by Raghavendra Bhat (raghavendra)
REVIEW: http://review.gluster.org/7957 (protocol/server: reflect lru limit in inode table also) posted (#6) for review on master by Raghavendra Bhat (raghavendra)
COMMIT: http://review.gluster.org/7957 committed in master by Raghavendra G (rgowdapp)

------
commit 6ba178fd9ebf9fc98415c30bcd338a68ee5eb601
Author: Raghavendra Bhat <raghavendra>
Date:   Tue Jun 3 00:28:08 2014 +0530

    protocol/server: reflect lru limit in inode table also

    Upon reconfigure, when the lru limit of the inode table is changed,
    the new value was just saved in the private structure of the
    protocol/server xlator while the inode table still kept the old
    value. A brick restart was required for the change to be reflected.

    To handle this, traverse the xlator tree and check whether each
    xlator is a bound_xl or not (if it is a bound_xl, it will have its
    itable pointer set). For each bound_xl, get its inode table and set
    the lru limit to the new value given via the cli. Also prune the
    inode table so that extra inodes are purged from it.

    Change-Id: I6909be028c116adaa1d1a5108470015b5fc6f09d
    BUG: 1103756
    Signed-off-by: Raghavendra Bhat <raghavendra>
    Reviewed-on: http://review.gluster.org/7957
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Raghavendra G <rgowdapp>
    Tested-by: Raghavendra G <rgowdapp>
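In outline, the fix pushes the new limit into every bound_xl's inode table at reconfigure time. Below is a condensed sketch of that approach; it assumes a helper inode_table_set_lru_limit() that behaves as the commit message describes (stores the limit under the table lock, then prunes), and the wrapper function name is hypothetical. Exact names and signatures in the tree may differ:

/* Condensed sketch of the fix: walk the xlator graph and, for every
 * xlator whose itable pointer is set (i.e. a bound_xl), apply the new
 * lru limit and prune the table. */
#include "xlator.h"
#include "inode.h"

static void
set_itable_lru_limit (xlator_t *each, void *data)
{
        uint32_t lru_limit = *(uint32_t *)data;

        /* Only bound_xl xlators have an inode table attached. */
        if (each->itable)
                /* Assumed helper: sets itable->lru_limit under the
                 * table lock, then prunes, purging inodes that exceed
                 * the (possibly smaller) new limit right away. */
                inode_table_set_lru_limit (each->itable, lru_limit);
}

/* Hypothetical wrapper, invoked from server_reconfigure() once the new
 * option value has been read; xlator_foreach() visits each xlator in
 * the graph in turn. */
static void
apply_new_lru_limit (xlator_t *this, uint32_t lru_limit)
{
        xlator_foreach (this, set_itable_lru_limit, &lru_limit);
}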
A beta release for GlusterFS 3.6.0 has been released [1]. Please verify whether this release resolves this bug report for you. If the glusterfs-3.6.0beta1 release does not have a resolution for this issue, leave a comment in this bug and move the status to ASSIGNED. If this release fixes the problem for you, leave a note and change the status to VERIFIED.

Packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update (possibly an "updates-testing" repository) infrastructure for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-September/018836.html
[2] http://supercolony.gluster.org/pipermail/gluster-users/
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.6.1, please reopen this bug report.

glusterfs-3.6.1 has been announced [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-November/019410.html
[2] http://supercolony.gluster.org/mailman/listinfo/gluster-users