Description of problem:
On any Gluster volume, enable profiling and then set a value for diagnostics.stats-dump-interval. After that, set diagnostics.stats-dump-interval back to 0. This does not terminate the internal io-stats dump thread; instead the thread barely sleeps and keeps dumping information in a tight loop. Further, after setting the value to 0, setting it to another non-zero value creates an additional dump thread. In other words, the dump thread is never terminated in a running gluster process that has io-stats in its graph. Thread presence can be checked with gstack <PID> | grep dump, which shows _ios_dump_thread in one or more process stacks.

How reproducible:
Always

Steps to Reproduce:
(in Description)
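For illustration, a minimal C sketch of why a zero interval degenerates into a busy loop; the structure and names here (struct dump_conf, dump_interval, dump_stats, dump_loop) are assumptions for illustration, not the actual io-stats source:

#include <unistd.h>

struct dump_conf {
    int dump_interval;   /* seconds between stat dumps; 0 is meant to disable dumping */
};

static void dump_stats(struct dump_conf *conf)
{
    (void)conf;          /* stand-in for writing the stats dump */
}

static void *dump_loop(void *arg)
{
    struct dump_conf *conf = arg;

    for (;;) {
        /* If the interval is reconfigured to 0 but the thread is never
         * cancelled, sleep(0) returns immediately and this loop spins,
         * dumping stats continuously - the behaviour reported above. */
        sleep(conf->dump_interval);
        dump_stats(conf);
    }
    return NULL;
}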
REVIEW: https://review.gluster.org/20465 (io-stats: Terminate dump thread when dump interval is set to 0) posted (#1) for review on master by Shyamsundar Ranganathan
COMMIT: https://review.gluster.org/20465 committed in master by "Shyamsundar Ranganathan" <srangana> with a commit message- io-stats: Terminate dump thread when dump interval is set to 0

_ios_dump_thread is not terminated by the function _ios_destroy_dump_thread when the diagnostic interval is set to 0 (which means disable auto dumping).

During reconfigure, if the value changes from 0 to another then the thread is started, but on reconfiguring this to 0 the thread is not being terminated.

Further, if the value is changed from 0 to X to 0 to Y, where X and Y are 2 arbitrary duration numbers, the reconfigure code ends up starting one more thread (for each change from 0 to a valid interval).

This patch fixes the same by terminating the thread when the value changes from non-zero to 0.

NOTE: It would seem nicer to use conf->dump_thread and check its value for thread presence etc. but there is no documented invalid value for the same, and hence an invalid check is not feasible, thus introducing a new running bool to determine the same.

Fixes: bz#1598548
Change-Id: I3e7d2ce8f033879542932ac730d57bfcaf14af73
Signed-off-by: ShyamsundarR <srangana>
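For illustration, the pattern the commit message describes (an explicit running flag plus terminating the thread when the interval drops to 0) might look roughly like the following C sketch; the identifiers (struct dump_conf, dump_loop, reconfigure_dump_interval) are hypothetical and do not reproduce the actual GlusterFS code:

#include <pthread.h>
#include <stdbool.h>
#include <unistd.h>

struct dump_conf {
    pthread_t dump_thread;
    bool dump_thread_running;   /* explicit presence flag, as per the NOTE above:
                                   pthread_t has no documented invalid value to test */
    int dump_interval;          /* seconds; 0 disables auto dumping */
};

static void *dump_loop(void *arg)
{
    struct dump_conf *conf = arg;
    for (;;) {
        sleep(conf->dump_interval);
        /* ... dump stats ... */
    }
    return NULL;
}

/* Reconfigure sketch: start the dumper when the interval goes from 0 to
 * non-zero, and cancel/join it when it goes back to 0, so repeated toggling
 * never leaks threads. */
static void reconfigure_dump_interval(struct dump_conf *conf, int new_interval)
{
    if (new_interval > 0 && !conf->dump_thread_running) {
        if (pthread_create(&conf->dump_thread, NULL, dump_loop, conf) == 0)
            conf->dump_thread_running = true;
    } else if (new_interval == 0 && conf->dump_thread_running) {
        pthread_cancel(conf->dump_thread);    /* sleep() is a cancellation point */
        pthread_join(conf->dump_thread, NULL);
        conf->dump_thread_running = false;
    }
    conf->dump_interval = new_interval;
}

Tracking presence with an explicit bool sidesteps the issue noted in the commit message that there is no documented invalid pthread_t value to compare against.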
This bug is being closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.0, please open a new bug report. glusterfs-5.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2018-October/000115.html
[2] https://www.gluster.org/pipermail/gluster-users/