In a brick_mux environment the shd process consumes a lot of memory. After capturing a statedump I found that it allocates ~1 MB per afr xlator for every brick. With 4k volumes configured it consumes almost 6 GB of RSS in total, of which ~4 GB is consumed by inode tables:

[cluster/replicate.test1-replicate-0 - usage-type gf_common_mt_list_head memusage]
size=1273488
num_allocs=2
max_size=1273488
max_num_allocs=2
total_allocs=2

In inode_new_table we allocate this memory (~1 MB) for the inode and dentry hash bucket lists. For shd we pass an lru_limit of 1, so there is no need to create such a big hash table; the inode_table size should be sized down for shd to reduce the memory consumption of the shd process.
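For illustration only, here is a minimal self-contained C sketch of the idea behind the fix: derive the hash bucket counts from lru_limit instead of always allocating fixed-size arrays. The struct, the function name table_init_sketch, the 1024 cutoff, and the default bucket counts are assumptions for this sketch, not the actual GlusterFS code; the defaults are chosen because two arrays of 16-byte list heads with 65536 and 14057 buckets add up to exactly the 1273488 bytes seen in the statedump above.

/* Sketch only: size inode/dentry hash tables from lru_limit.
 * Names and constants are illustrative, not GlusterFS's API. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct list_head {
    struct list_head *next;
    struct list_head *prev;
};

/* Assumed fixed defaults: ~1 MB + ~220 KB of 16-byte list heads per
 * table, paid by every afr xlator no matter how few inodes it caches. */
#define DEFAULT_INODE_HASHSIZE 65536
#define DEFAULT_DENTRY_HASHSIZE 14057

typedef struct {
    uint32_t lru_limit;
    size_t inode_hashsize;
    size_t dentry_hashsize;
    struct list_head *inode_hash;
    struct list_head *dentry_hash;
} inode_table_sketch_t;

static inode_table_sketch_t *
table_init_sketch(uint32_t lru_limit)
{
    inode_table_sketch_t *t = calloc(1, sizeof(*t));
    if (!t)
        return NULL;

    t->lru_limit = lru_limit;

    /* Key idea: when the caller caps the LRU at a handful of inodes
     * (shd passes lru_limit == 1), a tiny bucket array is enough; only
     * an unbounded (or large) table needs the big defaults. */
    if (lru_limit == 0 || lru_limit > 1024) {
        t->inode_hashsize = DEFAULT_INODE_HASHSIZE;
        t->dentry_hashsize = DEFAULT_DENTRY_HASHSIZE;
    } else {
        t->inode_hashsize = lru_limit;
        t->dentry_hashsize = lru_limit;
    }

    t->inode_hash = calloc(t->inode_hashsize, sizeof(struct list_head));
    t->dentry_hash = calloc(t->dentry_hashsize, sizeof(struct list_head));
    if (!t->inode_hash || !t->dentry_hash) {
        free(t->inode_hash);
        free(t->dentry_hash);
        free(t);
        return NULL;
    }
    return t;
}

int
main(void)
{
    /* An shd-style table: lru_limit 1 -> two 16-byte buckets instead of
     * ~1.2 MB, i.e. roughly 4 GB saved across ~4k replicate xlators. */
    inode_table_sketch_t *t = table_init_sketch(1);
    if (t)
        printf("buckets: inode=%zu dentry=%zu\n",
               t->inode_hashsize, t->dentry_hashsize);
    return 0;
}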
The patch has been committed upstream; see https://github.com/gluster/glusterfs/issues/1538
This patch (inode change) https://review.gluster.org/#/c/glusterfs/+/22184/ also needs to be backported downstream when the previous patch is merged, otherwise shd will crash.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (glusterfs bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2021:1462