+++ This bug was initially created as a clone of Bug #1447606 +++

Description of problem:
Expanding a Gluster volume that is sharded may cause file corruption.

Sharded volumes are typically used for VM images. If such volumes are expanded, or possibly contracted (i.e., bricks are added or removed and the volume is rebalanced), there are reports of VM images getting corrupted.

If you are using sharded volumes, DO NOT rebalance them until this is fixed.

The status of this bug can be tracked in Bug #1426508.
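As a precaution while the fix lands, an administrator can check whether sharding is enabled on a volume before attempting any layout-changing operation. A minimal sketch using the standard gluster CLI ("myvol" is a placeholder volume name; `gluster volume get` assumes a reasonably recent release, 3.8 or later):

```shell
# Check whether the shard translator is enabled on the volume.
gluster volume get myvol features.shard

# If the reported value is "on", hold off on layout-changing operations
# such as the following until running a fixed release:
#   gluster volume add-brick myvol <new-bricks>
#   gluster volume rebalance myvol start
#   gluster volume remove-brick myvol <bricks> start
```

The option can also be seen in the "Options Reconfigured" section of `gluster volume info myvol`.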
REVIEW: https://review.gluster.org/17181 (glusterd: disallow rebalance & remove-brick on a sharded volume) posted (#2) for review on release-3.11 by Raghavendra Talur (rtalur)
COMMIT: https://review.gluster.org/17181 committed in release-3.11 by Shyamsundar Ranganathan (srangana)

------

commit c72ac23fdc1d41c3a01d20bbad802e7dc7f21c2f
Author: Atin Mukherjee <amukherj>
Date:   Wed May 3 16:42:22 2017 +0530

    glusterd: disallow rebalance & remove-brick on a sharded volume

    Change-Id: Idfbdbc61ca18054fdbf7556f74e195a63cd8a554
    BUG: 1447607
    Signed-off-by: Atin Mukherjee <amukherj>
    Reviewed-on: https://review.gluster.org/17160
    Reviewed-by: Raghavendra Talur <rtalur>
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Reviewed-by: N Balachandran <nbalacha>
    Reviewed-by: Amar Tumballi <amarts>
    CentOS-regression: Gluster Build System <jenkins.org>
    (cherry picked from commit 8375b3d70d5c6268c6770b42a18b2e1bc09e411e)
    Reviewed-on: https://review.gluster.org/17181
    Tested-by: Raghavendra Talur <rtalur>
    Reviewed-by: Prashanth Pai <ppai>
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.11.0, please open a new bug report.

glusterfs-3.11.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-May/000073.html
[2] https://www.gluster.org/pipermail/gluster-users/