+++ This bug was initially created as a clone of Bug #1046284 +++
+++ This bug was initially created as a clone of Bug #834729 +++

Description of problem:
Running "gluster volume remove-brick $volume $brick" without any arguments runs a force commit, which causes loss of data (though it warns you). You are not told that the default is a force commit. Defaulting to start, or not defaulting at all, would be more user friendly.

Version-Release number of selected component (if applicable):
glusterfs 3.3.0 built on May 31 2012 11:16:29

How reproducible:

Steps to Reproduce:
1. Have a started distribute gluster volume with at least 2 bricks and several files.
2. Remove a brick without specifying any options: "gluster volume remove-brick $volume $brick".
3. ls the gluster volume; there are fewer files than you began with.

Actual results:
Gluster runs "volume remove-brick $volume $brick force". Data is lost.

Expected results:
Either print the usage, or default to start instead of force.

Additional info:

--- Additional comment from Amar Tumballi on 2012-06-23 11:54:01 EDT ---

Hi,

Actually, the behavior exists for backward compatibility with the 3.1.x and 3.2.x versions. Those versions had no 'start' option, so remove-brick removed the bricks immediately, and we kept the same behavior in 3.3.0. That is why, when you run remove-brick without options, you are asked whether it is OK to continue, because there can be data loss.

This is not a bug, but the intended behavior, considering the backward compatibility.

--- Additional comment from Amar Tumballi on 2012-07-11 01:18:33 EDT ---

As explained in comment #1.

--- Additional comment from RHEL Product and Program Management on 2013-12-24 05:54:56 EST ---

Since this issue was entered in Bugzilla, the release flag has been set to ? to ensure that it is properly evaluated for this release.
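The data loss in the steps above can be pictured with a toy placement model. The sketch below is only an illustration of the idea, not GlusterFS's real DHT: the `Cluster` class, the brick names, and the byte-sum hash are all invented here. It shows why the old default (an immediate force commit) drops files, while 'start' followed by 'commit' migrates them off the brick first.

```python
class Cluster:
    """Toy volume: each file lands on one brick by a simple hash of its name."""

    def __init__(self, bricks):
        self.data = {b: {} for b in bricks}

    def _brick_for(self, name):
        bricks = sorted(self.data)
        return bricks[sum(name.encode()) % len(bricks)]

    def write(self, name, contents):
        self.data[self._brick_for(name)][name] = contents

    def ls(self):
        return sorted(f for files in self.data.values() for f in files)

    def remove_brick_force(self, brick):
        # Old default behaviour: drop the brick immediately.
        # Every file placed on it is lost.
        del self.data[brick]

    def remove_brick_start_commit(self, brick):
        # 'start' migrates the brick's files to the remaining bricks;
        # 'commit' then drops the (now empty) brick. No data is lost.
        migrated = self.data.pop(brick)
        for name, contents in migrated.items():
            self.write(name, contents)

vol = Cluster(["brick1", "brick2"])
for n in ("a.txt", "b.txt", "c.txt", "d.txt"):
    vol.write(n, "data")
before = vol.ls()            # all four files visible
vol.remove_brick_force("brick2")
after_force = vol.ls()       # files hashed to brick2 are gone

vol2 = Cluster(["brick1", "brick2"])
for n in ("a.txt", "b.txt", "c.txt", "d.txt"):
    vol2.write(n, "data")
vol2.remove_brick_start_commit("brick2")
after_migrate = vol2.ls()    # all four files survive
```

In this model, an `ls` after the force removal shows fewer files than before, matching step 3 of the reproduction, while the start/commit path leaves the listing unchanged.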
--- Additional comment from on 2013-12-24 05:57:29 EST ---

Hi,

We have received this RFE request (upstream: Bug #834729) from one of our RHS customers. I found the upstream RFE given above and learned from Amar's comment that it was closed because of backward compatibility with the 3.1.x and 3.2.x versions.

Please let me know whether it is still the same with our RHS, or whether we can change the default behaviour of the `gluster volume remove-brick` command from `commit+force` to `start` in RHS.

Thanks,
Vikhyat
REVIEW: http://review.gluster.org/7292 (gluster-cli: gluster volume remove-brick defaults to force commit, and causes data loss) posted (#1) for review on master by Atin Mukherjee (amukherj)
REVIEW: http://review.gluster.org/7292 (cli: remove-brick no longer defaults to commit-force) posted (#2) for review on master by Atin Mukherjee (amukherj)
REVIEW: http://review.gluster.org/7292 (cli: remove-brick no longer defaults to commit-force) posted (#3) for review on master by Atin Mukherjee (amukherj)
REVIEW: http://review.gluster.org/7302 (cli: Deprecation message added for remove-brick op with default behaviour.) posted (#1) for review on release-3.5 by Atin Mukherjee (amukherj)
The fix has been done and has received a +2 for the changes; the upstream merge is pending.
REVIEW: http://review.gluster.org/7302 (cli: Deprecation message added for remove-brick op with default behaviour.) posted (#2) for review on release-3.5 by Atin Mukherjee (amukherj)
REVIEW: http://review.gluster.org/7292 (cli: remove-brick no longer defaults to commit-force) posted (#4) for review on master by Atin Mukherjee (amukherj)
REVIEW: http://review.gluster.org/7302 (cli: Deprecation message added for remove-brick op with default behaviour.) posted (#3) for review on release-3.5 by Atin Mukherjee (amukherj)
REVIEW: http://review.gluster.org/7302 (cli: Deprecation message added for remove-brick op with default behaviour.) posted (#4) for review on release-3.5 by Atin Mukherjee (amukherj)
REVIEW: http://review.gluster.org/7292 (cli: remove-brick no longer defaults to commit-force) posted (#5) for review on master by Atin Mukherjee (amukherj)
REVIEW: http://review.gluster.org/7292 (cli: remove-brick no longer defaults to commit-force) posted (#6) for review on master by Atin Mukherjee (amukherj)
COMMIT: http://review.gluster.org/7292 committed in master by Vijay Bellur (vbellur)
------
commit 5dedef81b6ef91d462ce49ded4e148dfc17deee2
Author: Atin Mukherjee <amukherj>
Date: Wed Mar 19 11:30:22 2014 +0530

    cli: remove-brick no longer defaults to commit-force

    Problem: When gluster volume remove-brick is executed without any option, it defaults to a force commit, which results in data loss.

    Fix: remove-brick can no longer be executed without an explicit option; the user needs to provide the option on the command line, else the command throws a usage error.

    Earlier usage: volume remove-brick <VOLNAME> [replica <COUNT>] <BRICK> ... [start|stop|status|commit|force]

    Current usage: volume remove-brick <VOLNAME> [replica <COUNT>] <BRICK> ... <start|stop|status|commit|force>

    Change-Id: I2a49131f782a6c0dcd03b4dc8ebe5907999b0b49
    BUG: 1077682
    Signed-off-by: Atin Mukherjee <amukherj>
    Reviewed-on: http://review.gluster.org/7292
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Shyamsundar Ranganathan <sam.somari>
    Reviewed-by: Vijay Bellur <vbellur>
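The usage change in this commit (the trailing operation going from optional `[...]` to mandatory `<...>`) can be sketched as follows. This is only an assumed Python model of the parsing rule, not the actual GlusterFS C CLI parser; the function name and the usage string's exact formatting are illustrative, and the optional `replica <COUNT>` clause is left out of the sketch for brevity.

```python
# Valid trailing operations per the new usage string.
OPS = {"start", "stop", "status", "commit", "force"}

def parse_remove_brick(args):
    """Parse the words after 'volume remove-brick': <VOLNAME> <BRICK>... <op>.

    Instead of silently defaulting the missing operation to 'force'
    (the pre-fix behaviour), raise a usage error, mirroring the fix.
    """
    # Need at least a volume name, one brick, and an explicit operation.
    if len(args) < 3 or args[-1] not in OPS:
        raise ValueError(
            "Usage: volume remove-brick <VOLNAME> [replica <COUNT>] "
            "<BRICK> ... <start|stop|status|commit|force>")
    return {"volume": args[0], "bricks": args[1:-1], "op": args[-1]}
```

With this rule, `parse_remove_brick(["vol0", "server:/b1", "start"])` succeeds, while `parse_remove_brick(["vol0", "server:/b1"])` raises the usage error rather than force-committing.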
COMMIT: http://review.gluster.org/7302 committed in release-3.5 by Vijay Bellur (vbellur)
------
commit 92d3d8b8cb8fc5886d71fe184339fdcbeb5439db
Author: Atin Mukherjee <amukherj>
Date: Thu Mar 20 10:46:25 2014 +0530

    cli: Deprecation message added for remove-brick op with default behaviour.

    Background: From version 3.6 onwards, remove-brick can be executed with explicit options only.

    Change-Id: Ibe376e371c5aa7a68621cf4ec2e74c6809614f9b
    BUG: 1077682
    Signed-off-by: Atin Mukherjee <amukherj>
    Reviewed-on: http://review.gluster.org/7302
    Reviewed-by: Kaushal M <kaushal>
    Tested-by: Gluster Build System <jenkins.com>
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.5.0, please reopen this bug report.

glusterfs-3.5.0 has been announced on the Gluster Developers mailing list [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user