Red Hat Bugzilla – Bug 1529501
[shd]: shd occupies ~7.8G in memory while toggling cluster.self-heal-daemon in a loop, possibly leaky.
Last modified: 2018-10-25 13:17:28 EDT
Description of problem:
-------------------------
As part of verification of https://bugzilla.redhat.com/show_bug.cgi?id=1526363, created 300 dist-rep volumes with bricks multiplexed, then ran volume set operations in a loop:

<snip>
for i in {1..300};do gluster volume create butcher$i replica 2 gqas013.sbu.lab.eng.bos.redhat.com:/bricks1/brickA$i gqas016.sbu.lab.eng.bos.redhat.com:/bricks1/brickA$i gqas006.sbu.lab.eng.bos.redhat.com:/bricks1/brickA$i gqas008.sbu.lab.eng.bos.redhat.com:/bricks1/brickA$i gqas003.sbu.lab.eng.bos.redhat.com:/bricks1/brickA$i gqas007.sbu.lab.eng.bos.redhat.com:/bricks1/brickA$i;gluster v start butcher$i;sleep 2;done

followed by

for i in {1..300};do gluster v set butcher$i cluster.self-heal-daemon off;sleep 3;gluster v set butcher$i group metadata-cache;sleep 3;gluster v set butcher$i cluster.lookup-optimize on;sleep 3;done
<snip>

The self-heal daemon occupies almost 4.6G of resident memory after all the volume set operations.

**BEFORE VOL SET**:

[root@gqas008 /]# ps aux|grep glus
root 8078 12.4 2.6 28807468 1315220 ? Ssl 05:13 0:28 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/e32d8903c5b60efed5cc4e725235c143.socket --xlator-option *replicate*.node-uuid=cedc8e7d-d3a0-47f2-a50e-ebe12fe964bc

**AFTER VOL SET**:

[root@gqas008 /]# ps aux|grep glustershd
root 8078 3.0 9.4 31756588 4677648 ? Ssl 05:13 3:56 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/e32d8903c5b60efed5cc4e725235c143.socket --xlator-option *replicate*.node-uuid=cedc8e7d-d3a0-47f2-a50e-ebe12fe964bc

Memory consumption increased from 1.3G to 4.6G. Since the delta is massive, raising with high priority.

Version-Release number of selected component (if applicable):
--------------------------------------------------------------
[root@gqas008 /]# rpm -qa|grep glus
glusterfs-libs-3.8.4-52.3.el7rhgs.x86_64
glusterfs-server-3.8.4-52.3.el7rhgs.x86_64

How reproducible:
------------------
2/2

Steps to Reproduce:
-------------------
1. Create a large number of volumes of type dist-rep (say, 300).
2. Disable the self-heal daemon (and apply other volume set options) in a loop.

Actual results:
----------------
Drastic increase in memory consumption by shd after the volume set operations.

Expected results:
------------------
Controlled memory consumption by shd.
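Additional info:
-----------------
For anyone triaging this, a minimal sketch for narrowing down the growth is to capture glustershd statedumps before and after the vol-set loop and compare the memory accounting sections. This is only a sketch, not how the issue was originally measured; it assumes the default pid file path shown above and the default statedump directory /var/run/gluster.

<snip>
# Sketch, assuming default pid file and statedump directory.
SHD_PID=$(cat /var/run/gluster/glustershd/glustershd.pid)

# Baseline RSS (kB) and a statedump (SIGUSR1 asks a glusterfs process to dump its state).
ps -o rss= -p "$SHD_PID"
kill -USR1 "$SHD_PID"

# ... run the vol set loop from the description here ...

# RSS and a second statedump after the loop.
ps -o rss= -p "$SHD_PID"
kill -USR1 "$SHD_PID"

# List the two most recent dumps; diffing their mem-pool / allocation sections
# shows which entries keep growing across the loop.
ls -t /var/run/gluster/glusterdump.${SHD_PID}.dump.* | head -2
<snip>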
Development Management has reviewed and declined this request. You may appeal this decision by reopening this request.