Description of problem:

Before the fix:

pk@localhost - /var/run/gluster
14:55:45 :( ⚡ sudo grep ioc_t glusterdump.31395.dump.1526458170
[performance/io-cache.r3-io-cache - usage-type gf_ioc_mt_ioc_table_t memusage]
[performance/io-cache.r3-io-cache - usage-type gf_ioc_mt_ioc_table_t memusage]

After the fix:

pk@localhost - /var/run/gluster
14:55:47 :) ⚡ sudo grep ioc_t glusterdump.11980.dump.1526460969
[performance/io-cache.r3-io-cache - usage-type gf_ioc_mt_ioc_table_t memusage]

Csaba found that a statedump of a fuse mount contains two instances of each piece of mem-accounting information. On debugging, I realized that statedump is called for both ctx->master and ctx->active. Since ctx->active is a sub-graph of ctx->master, the entries are duplicated.
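For illustration only, here is a minimal standalone C sketch (hypothetical types, not glusterfs code) of why each memusage section shows up twice before the fix: the dump routine runs once for ctx->master and once for ctx->active, and ctx->active is the same sub-graph that was already walked as part of ctx->master.

/* Hypothetical model of the duplication, not glusterfs code. */
#include <stdio.h>

typedef struct graph {
    const char *name;
} graph_t;

static void dump_graph(const graph_t *g)
{
    /* One printf stands in for the whole per-translator memusage section. */
    printf("[performance/io-cache.%s - usage-type gf_ioc_mt_ioc_table_t memusage]\n",
           g->name);
}

int main(void)
{
    graph_t graph   = { "r3-io-cache" };
    graph_t *master = &graph;   /* ctx->master */
    graph_t *active = &graph;   /* ctx->active: the same underlying sub-graph */

    dump_graph(master);
    dump_graph(active);         /* duplicate of the section printed above */
    return 0;
}

Running this prints the io-cache memusage line twice, mirroring the grep output of the "before" statedump.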
REVIEW: https://review.gluster.org/20027 (statedump: Prevent duplicate statedump for master and active) posted (#1) for review on master by Pranith Kumar Karampuri
COMMIT: https://review.gluster.org/20027 committed in master by "Pranith Kumar Karampuri" <pkarampu> with a commit message:

    statedump: Prevent duplicate statedump for master and active

    Csaba found that statedump of fusemount has two instances of each of the
    mem-accounting information. On debugging, I realized that statedump is
    called for both ctx->master and ctx->active. Since ctx->active is a
    sub-graph of ctx->master, there are duplicate entries. Fixed this part to
    prevent duplication in this patch.

    fixes bz#1578721
    BUG: 1578721
    Change-Id: I5a63b4f5933d4d720ac010c58e6dee3b27067d42
    Signed-off-by: Pranith Kumar K <pkarampu>
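As a rough illustration of the idea described in the commit message (hypothetical names and structure, not the actual patch), the same minimal model as above can be extended so the dump routine skips ctx->active whenever it is the sub-graph already covered by ctx->master:

/* Hedged sketch of the fix's idea, not the real glusterfs change. */
#include <stdio.h>

typedef struct graph {
    const char   *name;
    struct graph *parent;   /* set when this graph is a sub-graph of another */
} graph_t;

typedef struct ctx {
    graph_t *master;
    graph_t *active;
} ctx_t;

static void dump_graph(const graph_t *g)
{
    printf("[performance/io-cache.%s - usage-type gf_ioc_mt_ioc_table_t memusage]\n",
           g->name);
}

static void statedump(const ctx_t *ctx)
{
    if (ctx->master)
        dump_graph(ctx->master);

    /* Skip ctx->active when its entries were already emitted while walking
     * ctx->master; this is what removes the duplicate sections. */
    if (ctx->active && ctx->active->parent != ctx->master)
        dump_graph(ctx->active);
}

int main(void)
{
    graph_t master = { "r3-io-cache", NULL };
    graph_t active = { "r3-io-cache", &master };
    ctx_t   ctx    = { &master, &active };

    statedump(&ctx);   /* emits the section once, matching the "after" output */
    return 0;
}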
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.0, please open a new bug report. glusterfs-5.0 has been announced on the Gluster mailing lists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2018-October/000115.html
[2] https://www.gluster.org/pipermail/gluster-users/