Bug 797887 - remove-brick status gives wrong and unrelated error message.
Summary: remove-brick status gives wrong and unrelated error message.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: cli
Version: mainline
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Assignee: shishir gowda
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 817967
 
Reported: 2012-02-27 12:54 UTC by Vijaykumar Koppad
Modified: 2014-08-25 00:49 UTC
CC List: 3 users

Fixed In Version: glusterfs-3.4.0
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-07-24 17:42:36 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Vijaykumar Koppad 2012-02-27 12:54:09 UTC
Description of problem:

Volume Name: doa
Type: Distributed-Replicate
Volume ID: 32eaa11d-743c-4e4d-99a6-6993a732e869
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 192.168.1.56:/root/bricks/doa/d1
Brick2: 192.168.1.56:/root/bricks/doa/d3
Brick3: 192.168.1.56:/root/bricks/doa/d2
Brick4: 192.168.1.56:/root/bricks/doa/d4
Options Reconfigured:
geo-replication.indexing: on
features.quota: on
root@vostro:~/programming# gluster --mode=script volume remove-brick doa replica 2 192.168.1.56:/root/bricks/doa/d2 192.168.1.56:/root/bricks/doa/d4 start
Remove Brick successful

gluster --mode=script volume remove-brick doa replica 2 192.168.1.56:/root/bricks/doa/d2 192.168.1.56:/root/bricks/doa/d4 status
Rebalance is not running on volume doa

Version-Release number of selected component (if applicable): GlusterFS master
[728de5be7ce2975efb59bb5928fd7261d5ec7760]

How reproducible: always


Steps to Reproduce:
1. Create a distributed-replicate volume.
2. Start remove-brick on a replicate sub-volume.
3. Check remove-brick status. It reports that rebalance is not running on the volume (see the command sketch after this list).
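
A minimal command sketch of the steps above; the remove-brick commands and brick paths are taken from the report, while the volume create/start lines are assumed (they are not shown in the report) and may need adjusting for your setup:

  gluster volume create doa replica 2 192.168.1.56:/root/bricks/doa/d1 192.168.1.56:/root/bricks/doa/d3 192.168.1.56:/root/bricks/doa/d2 192.168.1.56:/root/bricks/doa/d4
  gluster volume start doa
  gluster volume remove-brick doa replica 2 192.168.1.56:/root/bricks/doa/d2 192.168.1.56:/root/bricks/doa/d4 start
  gluster volume remove-brick doa replica 2 192.168.1.56:/root/bricks/doa/d2 192.168.1.56:/root/bricks/doa/d4 status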

Comment 1 shishir gowda 2012-03-15 06:15:51 UTC
The above description of the bug is invalid.
`remove-brick start` uses rebalance to decommission a node; hence the rebalance-related status message that was displayed.

However, since rebalance has been enhanced, the status displayed by remove-brick status should mirror those updates, so the bug is being kept open.

Comment 2 Vijaykumar Koppad 2012-03-15 09:55:41 UTC
With the latest git pull [d05708d7976a8340ae7647fd26f38f22f1863b6a], the message is different:

Volume Name: doa
Type: Distributed-Replicate
Volume ID: 57932a5f-a8ef-42d6-9b67-837b65bb7f79
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 192.168.1.56:/root/bricks/doa/d1
Brick2: 192.168.1.56:/root/bricks/doa/d3
Brick3: 192.168.1.56:/root/bricks/doa/d2
Brick4: 192.168.1.56:/root/bricks/doa/d4
Options Reconfigured:
geo-replication.indexing: on
features.quota: on
root@vostro:~/programming# gluster --mode=script volume remove-brick doa replica 2 192.168.1.56:/root/bricks/doa/d2 192.168.1.56:/root/bricks/doa/d4 status
remove-brick not started
root@vostro:~/programming# gluster --mode=script volume remove-brick doa replica 2 192.168.1.56:/root/bricks/doa/d2 192.168.1.56:/root/bricks/doa/d4 start
Remove Brick successful
root@vostro:~/programming# gluster --mode=script volume remove-brick doa replica 2 192.168.1.56:/root/bricks/doa/d2 192.168.1.56:/root/bricks/doa/d4 status
remove-brick not started


Either the remove-brick start should fail, or the status command should give a proper status message.

Comment 3 Anand Avati 2012-03-18 06:40:56 UTC
CHANGE: http://review.gluster.com/2949 (cli/remove-brick: Enhance remove-brick status to display) merged in master by Anand Avati (avati)

