Description of problem:
When searching for a compatible brick, glusterd should ignore the diagnostics.brick-log-level option while brick multiplexing is enabled.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Set up 2 volumes and enable brick multiplexing.
2. Set diagnostics.brick-log-level to DEBUG for any one volume.
3. Start both volumes.

Actual results:
The volumes get separate brick processes, because glusterd compares brick options at the time of attaching a brick to an already running brick. If it finds any difference, it starts the brick as a separate process.

Expected results:
Both volumes should get the same PID, because enabling DEBUG log-level on one volume has no impact on brick functionality.

Additional info:
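The steps above can be sketched as a CLI session. This is a reproduction sketch only; the volume names and brick paths are hypothetical, and it assumes a single-node test setup with glusterd running:

```shell
# Enable brick multiplexing cluster-wide
gluster volume set all cluster.brick-multiplex on

# Create two volumes (single-brick, force for a test box)
gluster volume create vol1 $HOSTNAME:/bricks/vol1 force
gluster volume create vol2 $HOSTNAME:/bricks/vol2 force

# Raise the brick log level on only one of them
gluster volume set vol1 diagnostics.brick-log-level DEBUG

gluster volume start vol1
gluster volume start vol2

# Compare the brick PIDs: before the fix they differ,
# after the fix both bricks share one process
gluster volume status vol1
gluster volume status vol2
```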
REVIEW: https://review.gluster.org/20487 (glusterd: To find a compatible brick ignore brick-log option) posted (#1) for review on master by MOHIT AGRAWAL
COMMIT: https://review.gluster.org/20487 committed in master by "Atin Mukherjee" <amukherj> with a commit message:

glusterd: To find a compatible brick ignore diagnostics.brick-log-level option

Problem: glusterd starts a volume as a separate process instead of attaching it to the already running process if the volume has a different brick-log-level option. A differing brick-log-level has no functional impact on a brick, so glusterd should attach the brick to the already running process.

Solution: Ignore the brick-log-level option in unsafe_option.

BUG: 1599628
Change-Id: I72638ff2026fcd9332bc38e1144b1ef4a708820b
fixes: bz#1599628
Signed-off-by: Mohit Agrawal <moagrawal>
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-5.0, please open a new bug report. glusterfs-5.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2018-October/000115.html [2] https://www.gluster.org/pipermail/gluster-users/