Description of problem: In the output of 'gluster volume remove-brick <VOLUME> <BRICK(s)> status', the status field value for the Red Hat Storage (RHS) server nodes that do not require any action is given as 'not started'. This is ambiguous: it can be taken to mean that the action has not yet started but will start later, when the real meaning is 'no action required'.

--------------------------------------------------------------------
# gluster volume remove-brick RHS_VM_imagestore RHS01:/brick6 RHS02:/brick6 start
Remove Brick start successful

# gluster volume remove-brick RHS_VM_imagestore RHS01:/brick6 RHS02:/brick6 status
     Node    Rebalanced-files    size    scanned    failures    status
---------    ----------------    ----    -------    --------    -----------
localhost                   1       0          9           0    in progress
    RHS02                   0       0         31           0    completed
    RHS03                   0       0          0           0    not started
    RHS04                   0       0          0           0    not started
--------------------------------------------------------------------

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results: In the output of 'gluster volume remove-brick <VOLUME> <BRICK(s)> status', the status field value for the RHS server nodes that do not require any action is given as 'not started'.

Expected results: In the output of 'gluster volume remove-brick <VOLUME> <BRICK(s)> status', the status field value for the RHS server nodes that do not require any action should be 'no action required', to avoid ambiguity.

Additional info:
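For reference, a minimal reproduction sketch using the volume and brick names from the report above (these names are placeholders from this report, not a verified procedure on your cluster):

# Start removing two bricks from the volume; data on them is migrated off
gluster volume remove-brick RHS_VM_imagestore RHS01:/brick6 RHS02:/brick6 start

# Query migration progress; nodes that hold none of the removed bricks are
# the ones currently reported as 'not started'
gluster volume remove-brick RHS_VM_imagestore RHS01:/brick6 RHS02:/brick6 status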
REVIEW: http://review.gluster.org/5383 (cli :remove-brick process output leads to ambiguity) posted (#1) for review on master by susant palai (spalai)
COMMIT: http://review.gluster.org/5383 committed in master by Vijay Bellur (vbellur)
------
commit e45e0037f6df6a0fab846a83fb2c99bb09417cf4
Author: susant <spalai>
Date:   Wed Jul 24 14:11:55 2013 +0530

    cli :remove-brick process output leads to ambiguity

    The output of remove-brick status as "Not started" leads to ambiguity.
    We should not show the status of the Server nodes which do not
    participate in the remove-brick process.

    Change-Id: I85fea40deb15f3e2dd5487d881f48c9aff7221de
    BUG: 986896
    Signed-off-by: susant <spalai>
    Reviewed-on: http://review.gluster.org/5383
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Vijay Bellur <vbellur>
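Per the commit message, the fix suppresses rows for server nodes that do not participate in the remove-brick operation rather than relabeling them. An illustrative (not captured) status output for the volume from this report would therefore look roughly like:

# gluster volume remove-brick RHS_VM_imagestore RHS01:/brick6 RHS02:/brick6 status
     Node    Rebalanced-files    size    scanned    failures    status
---------    ----------------    ----    -------    --------    -----------
localhost                   1       0          9           0    in progress
    RHS02                   0       0         31           0    completed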
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.5.0, please reopen this bug report.

glusterfs-3.5.0 has been announced on the Gluster Developers mailing list [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user