Bug 797887

Summary: remove-brick status gives a wrong and unrelated error message.
Product: [Community] GlusterFS
Component: cli
Version: mainline
Hardware: x86_64
OS: Linux
Status: CLOSED CURRENTRELEASE
Severity: high
Priority: high
Reporter: Vijaykumar Koppad <vkoppad>
Assignee: shishir gowda <sgowda>
CC: bbandari, gluster-bugs, nsathyan
Fixed In Version: glusterfs-3.4.0
Doc Type: Bug Fix
Last Closed: 2013-07-24 17:42:36 UTC
Bug Blocks: 817967

Description Vijaykumar Koppad 2012-02-27 12:54:09 UTC
Description of problem:

Volume Name: doa
Type: Distributed-Replicate
Volume ID: 32eaa11d-743c-4e4d-99a6-6993a732e869
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 192.168.1.56:/root/bricks/doa/d1
Brick2: 192.168.1.56:/root/bricks/doa/d3
Brick3: 192.168.1.56:/root/bricks/doa/d2
Brick4: 192.168.1.56:/root/bricks/doa/d4
Options Reconfigured:
geo-replication.indexing: on
features.quota: on
root@vostro:~/programming# gluster --mode=script volume remove-brick doa replica 2 192.168.1.56:/root/bricks/doa/d2 192.168.1.56:/root/bricks/doa/d4 start
Remove Brick successful

gluster --mode=script volume remove-brick doa replica 2 192.168.1.56:/root/bricks/doa/d2 192.168.1.56:/root/bricks/doa/d4 status
Rebalance is not running on volume doa

Version-Release number of selected component (if applicable): GlusterFS master [728de5be7ce2975efb59bb5928fd7261d5ec7760]

How reproducible: always


Steps to Reproduce:
1. Create a distributed-replicate volume.
2. Start remove-brick for one replicate sub-volume.
3. Check remove-brick status: it reports that rebalance is not running on the volume instead of reporting the remove-brick progress (a full command sketch follows below).
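
A minimal reproduction sketch, using the host and brick paths from the volume info above (single-node setup assumed, paths illustrative):

gluster volume create doa replica 2 transport tcp \
    192.168.1.56:/root/bricks/doa/d1 192.168.1.56:/root/bricks/doa/d3 \
    192.168.1.56:/root/bricks/doa/d2 192.168.1.56:/root/bricks/doa/d4
gluster volume start doa
gluster --mode=script volume remove-brick doa replica 2 \
    192.168.1.56:/root/bricks/doa/d2 192.168.1.56:/root/bricks/doa/d4 start
gluster --mode=script volume remove-brick doa replica 2 \
    192.168.1.56:/root/bricks/doa/d2 192.168.1.56:/root/bricks/doa/d4 status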

Comment 1 shishir gowda 2012-03-15 06:15:51 UTC
The above description of the bug is invalid.
`remove-brick start` uses rebalance internally to decommission the bricks being removed; hence the status message that was displayed.

But since rebalance has been enhanced, the status displayed by remove-brick status should mirror those updates, so the bug is being kept open.
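
For context, the "Rebalance is not running" message above comes from the rebalance machinery that remove-brick reuses; the same status can also be queried directly with the plain rebalance status command (illustrative, using the volume from this report):

gluster volume rebalance doa status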

Comment 2 Vijaykumar Koppad 2012-03-15 09:55:41 UTC
With the latest git pull [d05708d7976a8340ae7647fd26f38f22f1863b6a], the message is different:

Volume Name: doa
Type: Distributed-Replicate
Volume ID: 57932a5f-a8ef-42d6-9b67-837b65bb7f79
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 192.168.1.56:/root/bricks/doa/d1
Brick2: 192.168.1.56:/root/bricks/doa/d3
Brick3: 192.168.1.56:/root/bricks/doa/d2
Brick4: 192.168.1.56:/root/bricks/doa/d4
Options Reconfigured:
geo-replication.indexing: on
features.quota: on
root@vostro:~/programming# gluster --mode=script volume remove-brick doa replica 2 192.168.1.56:/root/bricks/doa/d2 192.168.1.56:/root/bricks/doa/d4 status
remove-brick not started
root@vostro:~/programming# gluster --mode=script volume remove-brick doa replica 2 192.168.1.56:/root/bricks/doa/d2 192.168.1.56:/root/bricks/doa/d4 start
Remove Brick successful
root@vostro:~/programming# gluster --mode=script volume remove-brick doa replica 2 192.168.1.56:/root/bricks/doa/d2 192.168.1.56:/root/bricks/doa/d4 status
remove-brick not started


Either the remove-brick start should fail, or the status command should give a proper status message.
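
Until that happens, a script has to guess from the message text; a rough sketch of such a guard (hypothetical, message patterns taken from the outputs above):

out=$(gluster --mode=script volume remove-brick doa replica 2 \
      192.168.1.56:/root/bricks/doa/d2 192.168.1.56:/root/bricks/doa/d4 status)
case "$out" in
  *"not started"*|*"not running"*) echo "remove-brick reported as not running even after a successful start" ;;
  *) printf '%s\n' "$out" ;;
esac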

Comment 3 Anand Avati 2012-03-18 06:40:56 UTC
CHANGE: http://review.gluster.com/2949 (cli/remove-brick: Enhance remove-brick status to display) merged in master by Anand Avati (avati)