Description of problem:
In a scaled environment with a large number of volumes, when a node or a gluster pod is restarted, glustershd logs grow unnecessarily: the daemons are restarted for every brick restart in the volume, which can probably be avoided.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
REVIEW: https://review.gluster.org/20439 (glusterd: Don't restart daemon services at every brick restart) posted (#1) for review on master by Atin Mukherjee
(In reply to Atin Mukherjee from comment #0)
> Description of problem:
>
> In a scaled environment having a large number of volumes when a node or a
> gluster pod gets restarted, glustershd logs grow unnecessarily because of
> every restart of bricks in the volume the daemons get restarted which
> probably can be avoided.

Please ignore the above description: I realize the current code already handles this part, but not in a clean manner. We need to restart the daemon once, at the end of starting all the bricks, not in between.
COMMIT: https://review.gluster.org/20439 committed in master by "Atin Mukherjee" <amukherj> with a commit message:

glusterd: start the services after all the bricks are up

glusterd_svcs_manager() should be called once, after all the volumes have been started in one go.

Change-Id: I838cc50c29f3930a483aa9671958cdc186904030
Fixes: bz#1597247
Signed-off-by: Atin Mukherjee <amukherj>
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-5.0, please open a new bug report.

glusterfs-5.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2018-October/000115.html
[2] https://www.gluster.org/pipermail/gluster-users/