Description of problem:
With brick multiplexing enabled, when volume creation and deletion was run continuously for ~12 hours, the glusterfsd process on each of the three nodes consumed close to 14 GB of memory with a single volume in the system. This is quite high. Note that throughout the test the heketidb volume is never deleted, so the same brick process remains alive for the duration of the test.

Version-Release number of selected component (if applicable):
sh-4.2# rpm -qa | grep 'gluster'
glusterfs-libs-3.8.4-54.el7rhgs.x86_64
glusterfs-3.8.4-54.el7rhgs.x86_64
glusterfs-api-3.8.4-54.el7rhgs.x86_64
glusterfs-cli-3.8.4-54.el7rhgs.x86_64
glusterfs-fuse-3.8.4-54.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-54.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-54.el7rhgs.x86_64
glusterfs-server-3.8.4-54.el7rhgs.x86_64
gluster-block-0.2.1-14.el7rhgs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. On a CNS setup, run the following script for 12 hours:

while true; do
    for i in {1..5}; do heketi-cli volume create --size=1; done
    heketi-cli volume list | awk '{print $1}' | cut -c 4- >> vollist
    while read i; do heketi-cli volume delete $i; sleep 2; done < vollist
    rm vollist
done

Actual results:
The glusterfsd process consumes ~14 GB with 1 volume.

Expected results:
Typically, glusterfsd would consume < 1 GB for a volume.

Additional info:
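For context, brick multiplexing is toggled as a cluster-wide option via the gluster CLI; a minimal sketch of how the setup above would have it enabled (the option name is the upstream GlusterFS one; whether `volume get all` reports global options depends on the GlusterFS version):

```shell
# Enable brick multiplexing cluster-wide: bricks of compatible volumes
# are attached to a single glusterfsd process instead of one process per brick.
# "all" applies the option globally rather than to one volume.
gluster volume set all cluster.brick-multiplexing on

# Check the current value of the option (may require a newer gluster CLI).
gluster volume get all cluster.brick-multiplexing
```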
Build: 3.12.2-8

Verified on a three-node brick-mux-enabled setup. Created a base 2x3 volume, then in a loop created two volumes, started, stopped, and deleted them, for 3500 iterations. At the end, glusterfsd memory had grown to 2.2 GB, whereas earlier it grew to 14 GB. Compared to that, the memory leak is greatly reduced; as discussed with Mohit, this is acceptable.

Here is the output of the memory consumption:

############### Iteration 1 #############
              total   used   free   shared   buff/cache   available
Mem:           7.6G   246M   7.2G     8.8M         207M        7.1G
Swap:          2.0G     0B   2.0G

  PID USER   PR  NI    VIRT    RES   SHR S  %CPU %MEM    TIME+ COMMAND
 1614 root   20   0  606860   9040  4112 S   0.0  0.1  0:00.44 glusterd
 2325 root   20   0 1869420  21684  3684 S   0.0  0.3  0:00.09 glusterfsd

############### Iteration 3613 #############
              total   used   free   shared   buff/cache   available
Mem:           7.6G   2.5G   4.1G      88M         1.0G        4.7G
Swap:          2.0G     0B   2.0G

  PID USER   PR  NI    VIRT    RES   SHR S  %CPU %MEM     TIME+ COMMAND
 1614 root   20   0  615440  43916  4404 S   0.0  0.5  11:38.95 glusterd
 2325 root   20   0   86.9g   2.2g  4256 S   0.0 28.8  11:38.02 glusterfsd

Hence marking this bug as verified.
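The per-iteration numbers above were read off `free` and `top` by hand; a small helper like the following could log the brick process RSS at each iteration instead. This is a hypothetical sketch, not part of the verification run; it assumes the brick process is named `glusterfsd` as in the report.

```shell
#!/bin/sh
# rss_kb NAME — print the total resident set size in kB of all processes
# with the given name, or 0 if none are running.
rss_kb() {
    ps -C "$1" -o rss= 2>/dev/null | awk '{s += $1} END {print s + 0}'
}

# Log one timestamped sample; call this once per create/delete iteration
# (or from cron) and plot the column to see the leak's growth over time.
printf '%s glusterfsd rss_kb=%s\n' "$(date '+%F %T')" "$(rss_kb glusterfsd)"
```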
*** Bug 1619369 has been marked as a duplicate of this bug. ***
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2018:2607