Description of problem: Now that some users have confirmed that rebalance works fine in VM store environments while parallel I/O is going on, and sas has also confirmed the same, it is time to revert the CLI restrictions on running rebalance on sharded volumes.
REVIEW: https://review.gluster.org/17506 (Revert "glusterd: disallow rebalance & remove-brick on a sharded volume") posted (#1) for review on master by Krutika Dhananjay (kdhananj)
COMMIT: https://review.gluster.org/17506 committed in master by Atin Mukherjee (amukherj)
------
commit c0d4081cf4b90a4316b786cc53263a7c56fdb344
Author: Krutika Dhananjay <kdhananj>
Date: Mon Jun 12 11:17:01 2017 +0530

    Revert "glusterd: disallow rebalance & remove-brick on a sharded volume"

    This reverts commit 8375b3d70d5c6268c6770b42a18b2e1bc09e411e.

    Now that some of the users have confirmed rebalance works fine without
    causing corruption of VMs, time to revert the CLI restriction.

    Change-Id: I45493fcbb1f25fd0fff27b2b3526c42642ccb464
    BUG: 1460585
    Signed-off-by: Krutika Dhananjay <kdhananj>
    Reviewed-on: https://review.gluster.org/17506
    Reviewed-by: Atin Mukherjee <amukherj>
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Reviewed-by: Raghavendra G <rgowdapp>
    CentOS-regression: Gluster Build System <jenkins.org>
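With the restriction reverted, rebalance (and remove-brick) can once again be started on a sharded volume from the gluster CLI. A minimal sketch of the operations that were previously rejected by glusterd; the volume name `vmstore` is an assumption for illustration:

```shell
# Assumed volume name "vmstore", with sharding already enabled on it.
# Prior to this revert, glusterd rejected rebalance on sharded volumes.
gluster volume rebalance vmstore start

# Poll progress until the operation reports completion.
gluster volume rebalance vmstore status
```

These are standard gluster CLI invocations; they require a running glusterd and an existing volume, so the exact output depends on the deployment.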
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.12.0, please open a new bug report. glusterfs-3.12.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-September/000082.html
[2] https://www.gluster.org/pipermail/gluster-users/