Description of problem:
By default the server inode table size is 16K. When upcall is enabled, too many forgets get sent on inodes because the brick can hold only 16K inodes in memory, so the size was previously raised to 50K. That is still smaller than the client inode table size. We have seen a performance improvement when the server inode table size is set to 200000 (almost the same as the client inode table size), so changing the value to 200000 is beneficial. Increasing it raises memory consumption by less than 1 MB. A sketch of the corresponding group-file entry is shown below.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
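For illustration, here is a minimal sketch of what the shipped metadata-cache group profile could look like with this change. It assumes the option backing the server inode table size is network.inode-lru-limit and that the profile is installed as /var/lib/glusterd/groups/metadata-cache; both are taken from the usual GlusterFS layout and are not stated in this report, and the surrounding options are indicative only.

# /var/lib/glusterd/groups/metadata-cache  (sketch; exact contents may differ)
# Upcall-based cache invalidation keeps the client md-cache consistent.
features.cache-invalidation=on
features.cache-invalidation-timeout=600
performance.cache-invalidation=on
performance.stat-prefetch=on
performance.md-cache-timeout=600
# Server-side inode table size: raised from 50000 to 200000 so the brick can
# keep roughly as many inodes in memory as the client, reducing forgets.
network.inode-lru-limit=200000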
REVIEW: https://review.gluster.org/19744 (extras/group: Change the server inode table size when upcall is on) posted (#4) for review on master by Poornima G
COMMIT: https://review.gluster.org/19744 committed in master by "Poornima G" <pgurusid> with a commit message-

extras/group: Change the server inode table size when upcall is on

By default server inode table size is 16K, when upcall is enabled, there is going to be too many forgets sent on inodes as the brick can hold only 16K inodes in memory, so we increased this to 50K. This is still less than the client inode table size. We have seen performance improvement when server inode table size is set to 200000 (almost as client inode table size). Hence changing the value to 200000. Increasing this increases the memory consumption by <1MB.

BUG: 1559235
Change-Id: I931db965cd34bf33094328541bd5a633b3357805
Signed-off-by: Poornima G <pgurusid>
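As a usage note (a sketch, not part of this report): the group profile can be applied to a volume with the gluster CLI, and the resulting limit can be checked afterwards. <VOLNAME> is a placeholder for an existing volume name.

# Apply all options from the metadata-cache group file to a volume
gluster volume set <VOLNAME> group metadata-cache

# Or set / verify the server inode table size directly
gluster volume set <VOLNAME> network.inode-lru-limit 200000
gluster volume get <VOLNAME> network.inode-lru-limit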
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-v4.1.0, please open a new bug report.

glusterfs-v4.1.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-June/000102.html
[2] https://www.gluster.org/pipermail/gluster-users/