Description of problem:
glusterd leaks memory when "gluster vol status tasks" is issued continuously for 12 hours. Its memory usage grew from 250 MB to 1.1 GB over the run, an increase of roughly 850 MB.
Version-Release number of selected component (if applicable):
glusterfs-3.12.2
How reproducible:
1/1
Steps to Reproduce:
1. On a six-node cluster with brick-multiplexing enabled
2. Created 150 disperse volumes and 250 replica volumes and started them
3. Took a memory footprint of glusterd on all the nodes (e.g. with a sampler like the sketch after this list)
4. Ran "while true; do gluster volume status all tasks; sleep 2; done", i.e. issued the command continuously with a 2-second gap between iterations
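For step 3, the memory footprint can be taken with any RSS monitor; the helper below is a hypothetical sketch (its name, output format and 60-second interval are assumptions, not part of the original setup) that reads glusterd's VmRSS from /proc/<pid>/status once a minute:

/* rss-sample.c: hypothetical sampler for step 3 -- prints a timestamped
 * VmRSS line for the given PID once a minute.  Build with
 * "gcc -o rss-sample rss-sample.c", run as "./rss-sample $(pidof glusterd)". */
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <glusterd-pid>\n", argv[0]);
        return 1;
    }

    char path[64];
    snprintf(path, sizeof(path), "/proc/%s/status", argv[1]);

    for (;;) {
        FILE *fp = fopen(path, "r");
        if (!fp) {
            perror(path);
            return 1;
        }

        char line[256];
        while (fgets(line, sizeof(line), fp)) {
            /* The VmRSS line carries the resident set size in kB. */
            if (strncmp(line, "VmRSS:", 6) == 0) {
                printf("%ld %s", (long)time(NULL), line);
                fflush(stdout);
                break;
            }
        }
        fclose(fp);
        sleep(60);
    }
}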
Actual results:
Observed glusterd's memory on node N1 increase from 260 MB to 1.1 GB
Expected results:
glusterd should not leak memory
Root cause:
A key set in the dictionary priv->glusterd_txn_opinfo leaks in every "volume status all" transaction: when the CLI fetches the list of volume names as the first transaction, the corresponding entry is never removed from the dictionary.
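The pattern can be illustrated with the self-contained model below. It is only a sketch: the set_txn_opinfo()/clear_txn_opinfo() helpers and the two-transaction split per CLI run are assumptions made for illustration; the real glusterd keeps the opinfo in the libglusterfs dictionary priv->glusterd_txn_opinfo, keyed by the transaction ID. The point is that if the opinfo stored for the volume-name-fetch transaction is never cleared, one stale entry accumulates per "status all" iteration.

/* txn-opinfo-leak.c: self-contained model of the leak pattern (illustrative
 * names only, not the actual glusterd code path). */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in for one key/value pair in priv->glusterd_txn_opinfo. */
struct txn_entry {
    char              txn_id[40];   /* key: transaction ID              */
    int               op;           /* stand-in for the stored opinfo   */
    struct txn_entry *next;
};

static struct txn_entry *txn_opinfo = NULL;   /* stand-in for the dict */
static size_t            txn_count  = 0;

/* Store the opinfo for a transaction under its ID. */
static void set_txn_opinfo(const char *txn_id, int op)
{
    struct txn_entry *e = calloc(1, sizeof(*e));
    if (!e)
        abort();
    snprintf(e->txn_id, sizeof(e->txn_id), "%s", txn_id);
    e->op = op;
    e->next = txn_opinfo;
    txn_opinfo = e;
    txn_count++;
}

/* Drop the opinfo of a finished transaction again. */
static void clear_txn_opinfo(const char *txn_id)
{
    struct txn_entry **pp = &txn_opinfo;
    while (*pp) {
        if (strcmp((*pp)->txn_id, txn_id) == 0) {
            struct txn_entry *victim = *pp;
            *pp = victim->next;
            free(victim);
            txn_count--;
            return;
        }
        pp = &(*pp)->next;
    }
}

int main(void)
{
    char txn_id[40];

    /* One "gluster volume status all tasks" every 2 seconds for 12 hours. */
    for (int i = 0; i < 21600; i++) {
        /* First transaction: the CLI fetches the list of volume names.
         * Its opinfo is set, but the cleanup never runs -- this is the leak. */
        snprintf(txn_id, sizeof(txn_id), "volname-fetch-%d", i);
        set_txn_opinfo(txn_id, 1);
        /* clear_txn_opinfo(txn_id);    <-- the missing step */

        /* The status transaction itself is set and cleared as expected. */
        snprintf(txn_id, sizeof(txn_id), "status-%d", i);
        set_txn_opinfo(txn_id, 2);
        clear_txn_opinfo(txn_id);
    }

    printf("stale txn opinfo entries after 12 hours: %zu\n", txn_count);
    return 0;
}

Each stale entry is small on its own, but the dictionary gains one key per iteration and nothing ever removes them, which matches glusterd's steadily rising resident memory over the 12-hour run.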
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-6.1, please open a new bug report.
glusterfs-6.1 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.
[1] https://lists.gluster.org/pipermail/announce/2019-April/000124.html
[2] https://www.gluster.org/pipermail/gluster-users/