Description of problem:
=======================
When the "gluster volume remove-brick" command is executed without a volume name, it does not return a usage error.

[root@rhs-client11 ~]# gluster volume remove-brick 10.70.36.35:/rhs/brick1/nb1 10.70.36.36:/rhs/brick1/nb2 start
[root@rhs-client11 ~]#

Version-Release number of selected component (if applicable):
=============================================================
[root@rhs-client11 ~]# rpm -qa | grep gluster
glusterfs-fuse-3.4.0.1rhs-1.el6rhs.x86_64
gluster-swift-container-1.4.8-4.el6.noarch
gluster-swift-1.4.8-4.el6.noarch
gluster-swift-doc-1.4.8-4.el6.noarch
vdsm-gluster-4.10.2-4.0.qa5.el6rhs.noarch
gluster-swift-plugin-1.0-5.noarch
gluster-swift-proxy-1.4.8-4.el6.noarch
gluster-swift-account-1.4.8-4.el6.noarch
glusterfs-geo-replication-3.4.0.1rhs-1.el6rhs.x86_64
org.apache.hadoop.fs.glusterfs-glusterfs-0.20.2_0.2-1.noarch
glusterfs-3.4.0.1rhs-1.el6rhs.x86_64
glusterfs-server-3.4.0.1rhs-1.el6rhs.x86_64
glusterfs-rdma-3.4.0.1rhs-1.el6rhs.x86_64
gluster-swift-object-1.4.8-4.el6.noarch
[root@rhs-client11 ~]#

How reproducible:
=================

Steps to Reproduce:
===================
1. Execute the "gluster volume remove-brick <vol-name> <bricks>" command without the volume name.

Actual results:
===============
[root@rhs-client11 ~]# gluster volume info vol-dis-rep

Volume Name: vol-dis-rep
Type: Distributed-Replicate
Volume ID: 5d6c5e6b-9ab5-450c-8fb1-9e33a16acb64
Status: Started
Number of Bricks: 9 x 2 = 18
Transport-type: tcp
Bricks:
Brick1: 10.70.36.35:/rhs/brick1/b1
Brick2: 10.70.36.36:/rhs/brick1/b2
Brick3: 10.70.36.35:/rhs/brick1/b3
Brick4: 10.70.36.36:/rhs/brick1/b4
Brick5: 10.70.36.35:/rhs/brick1/b5
Brick6: 10.70.36.36:/rhs/brick1/b6
Brick7: 10.70.36.37:/rhs/brick1/b7
Brick8: 10.70.36.38:/rhs/brick1/b8
Brick9: 10.70.36.37:/rhs/brick1/b9
Brick10: 10.70.36.38:/rhs/brick1/b10
Brick11: 10.70.36.37:/rhs/brick1/b11
Brick12: 10.70.36.38:/rhs/brick1/b12
Brick13: 10.70.36.35:/rhs/brick1/nb1
Brick14: 10.70.36.36:/rhs/brick1/nb2
Brick15: 10.70.36.37:/rhs/brick1/nb1
Brick16: 10.70.36.38:/rhs/brick1/nb2
Brick17: 10.70.36.37:/rhs/brick1/nb3
Brick18: 10.70.36.38:/rhs/brick1/nb4

[root@rhs-client11 ~]# gluster volume remove-brick 10.70.36.35:/rhs/brick1/nb1 10.70.36.36:/rhs/brick1/nb2 start
[root@rhs-client11 ~]#

Expected results:
=================
The command should print the usage message, or an error stating that volume "10.70.36.35:/rhs/brick1/nb1" does not exist.
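For context: a Gluster volume name cannot contain ':', while a brick is always written as <host>:<path>, so a ':' in the first argument after "remove-brick" is a reliable sign that the volume name was omitted. Below is a minimal, hypothetical C sketch of that kind of argument check; validate_remove_brick_args is an invented helper for illustration, not the actual GlusterFS CLI source.

    /* Hypothetical sketch, not the actual GlusterFS CLI code.
     * args[0] is expected to be the volume name; a ':' in it means
     * the caller skipped the volume name and passed a brick first. */
    #include <stdio.h>
    #include <string.h>

    static int
    validate_remove_brick_args (int nargs, char **args)
    {
            if (nargs < 2) {
                    fprintf (stderr, "Usage: volume remove-brick <VOLNAME> "
                             "<BRICK> ... <start|stop|status|commit|force>\n");
                    return -1;
            }
            if (strchr (args[0], ':')) {
                    /* First argument looks like <host>:<path>, i.e. a
                     * brick, so the volume name is missing or wrong. */
                    fprintf (stderr, "volume remove-brick: failed: Volume "
                             "%s does not exist\n", args[0]);
                    return -1;
            }
            return 0;
    }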
REVIEW: http://review.gluster.org/4972 (cli: Fix remove brick cli out for wrong volume name) posted (#1) for review on master by venkatesh somyajulu (vsomyaju)
REVIEW: http://review.gluster.org/4972 (cli: Fix remove brick cli out for wrong volume name) posted (#2) for review on master by venkatesh somyajulu (vsomyaju)
REVIEW: http://review.gluster.org/4972 (cli: Fix remove brick cli out for wrong volume name) posted (#3) for review on master by venkatesh somyajulu (vsomyaju)
COMMIT: http://review.gluster.org/4972 committed in master by Vijay Bellur (vbellur)
------
commit b3cc22184452824d436903baa62635acee739c50
Author: Venkatesh Somyajulu <vsomyaju>
Date:   Wed Jun 19 17:54:31 2013 +0530

    cli: Fix remove brick cli out for wrong volume name

    Problem:
    gluster volume remove-brick command, was not printing the error in
    case of volume-name specified is wrong.

    Fix:
    Fix will print error message to indicate that provided volume name
    is invalid.

    Although patch for bug 961669 http://review.gluster.org/#/c/4975/
    does print cli-output now, but still xml is unable to use the
    response values

    Change-Id: I2ee1df86c1e756fb8e93b4d6bbdd102b4f368f87
    BUG: 961307
    Signed-off-by: Venkatesh Somyajulu <vsomyaju>
    Reviewed-on: http://review.gluster.org/4972
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Vijay Bellur <vbellur>
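With this fix in place, the invocation from the description should fail with an explicit error instead of exiting silently. The exact message wording below is illustrative (an assumption, not quoted from the patch):

    [root@rhs-client11 ~]# gluster volume remove-brick 10.70.36.35:/rhs/brick1/nb1 10.70.36.36:/rhs/brick1/nb2 start
    volume remove-brick start: failed: Volume 10.70.36.35:/rhs/brick1/nb1 does not exist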
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.5.0, please reopen this bug report.

glusterfs-3.5.0 has been announced on the Gluster Developers mailing list [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user