Bug 797887 - remove-brick status gives wrong and unrelated error message.
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: cli
Version: mainline
Hardware: x86_64 Linux
Priority: high, Severity: high
Assigned To: shishir gowda
Blocks: 817967
Reported: 2012-02-27 07:54 EST by Vijaykumar Koppad
Modified: 2014-08-24 20:49 EDT
CC: 3 users

Fixed In Version: glusterfs-3.4.0
Doc Type: Bug Fix
Last Closed: 2013-07-24 13:42:36 EDT


Attachments: None
Description Vijaykumar Koppad 2012-02-27 07:54:09 EST
Description of problem:

Volume Name: doa
Type: Distributed-Replicate
Volume ID: 32eaa11d-743c-4e4d-99a6-6993a732e869
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 192.168.1.56:/root/bricks/doa/d1
Brick2: 192.168.1.56:/root/bricks/doa/d3
Brick3: 192.168.1.56:/root/bricks/doa/d2
Brick4: 192.168.1.56:/root/bricks/doa/d4
Options Reconfigured:
geo-replication.indexing: on
features.quota: on
root@vostro:~/programming# gluster --mode=script volume remove-brick doa replica 2 192.168.1.56:/root/bricks/doa/d2 192.168.1.56:/root/bricks/doa/d4 start
Remove Brick successful

gluster --mode=script volume remove-brick doa replica 2 192.168.1.56:/root/bricks/doa/d2 192.168.1.56:/root/bricks/doa/d4 status
Rebalance is not running on volume doa

Version-Release number of selected component (if applicable): GlusterFS master
[728de5be7ce2975efb59bb5928fd7261d5ec7760]

How reproducible: always


Steps to Reproduce:
1. Create a distributed-replicate volume.
2. Start a remove-brick of one replicate subvolume.
3. Check remove-brick status; it reports that rebalance is not running (scripted reproduction below).
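
For reference, the report can be reproduced end to end with a script. The following is a minimal sketch assuming a single test host (192.168.1.56) and brick directories under /root/bricks/doa, matching the volume shown in the description; the host and paths are illustrative, and newer releases may require appending `force` to volume create when several bricks share one host.

# Create and start a 2 x 2 distributed-replicate volume
# (host IP and brick paths are assumptions taken from the report).
HOST=192.168.1.56
gluster volume create doa replica 2 \
    $HOST:/root/bricks/doa/d1 $HOST:/root/bricks/doa/d3 \
    $HOST:/root/bricks/doa/d2 $HOST:/root/bricks/doa/d4
gluster volume start doa

# Start decommissioning one replica pair, then immediately query its status.
gluster --mode=script volume remove-brick doa replica 2 \
    $HOST:/root/bricks/doa/d2 $HOST:/root/bricks/doa/d4 start
gluster --mode=script volume remove-brick doa replica 2 \
    $HOST:/root/bricks/doa/d2 $HOST:/root/bricks/doa/d4 status
# Affected builds answer "Rebalance is not running on volume doa" here.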
Comment 1 shishir gowda 2012-03-15 02:15:51 EDT
The above description of the bug is invalid.
`remove-brick start` uses rebalance internally to decommission the bricks being removed, hence the status message that was displayed.

But since rebalance has been enhanced, the status displayed by `remove-brick status` should mirror those updates, so the bug is being kept open.
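
Until then, the decommission progress is only visible through the rebalance command that drives it; for example:

# Queries the same background process that `remove-brick start` launched;
# output wording varies by release.
gluster volume rebalance doa status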
Comment 2 Vijaykumar Koppad 2012-03-15 05:55:41 EDT
With the latest git pull [d05708d7976a8340ae7647fd26f38f22f1863b6a], the output differs:

Volume Name: doa
Type: Distributed-Replicate
Volume ID: 57932a5f-a8ef-42d6-9b67-837b65bb7f79
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 192.168.1.56:/root/bricks/doa/d1
Brick2: 192.168.1.56:/root/bricks/doa/d3
Brick3: 192.168.1.56:/root/bricks/doa/d2
Brick4: 192.168.1.56:/root/bricks/doa/d4
Options Reconfigured:
geo-replication.indexing: on
features.quota: on
root@vostro:~/programming# gluster --mode=script volume remove-brick doa replica 2 192.168.1.56:/root/bricks/doa/d2 192.168.1.56:/root/bricks/doa/d4 status
remove-brick not started
root@vostro:~/programming# gluster --mode=script volume remove-brick doa replica 2 192.168.1.56:/root/bricks/doa/d2 192.168.1.56:/root/bricks/doa/d4 start
Remove Brick successful
root@vostro:~/programming# gluster --mode=script volume remove-brick doa replica 2 192.168.1.56:/root/bricks/doa/d2 192.168.1.56:/root/bricks/doa/d4 status
remove-brick not started


Either the remove-brick start command should fail, or the status command should report a proper status message.
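
After the change referenced in comment 3 (shipped in glusterfs-3.4.0), `remove-brick status` prints a rebalance-style progress table instead of the one-liner above. The shape sketched below follows the 3.3/3.4-era CLI; the values and exact column layout are illustrative, not captured from this system:

gluster --mode=script volume remove-brick doa replica 2 \
    192.168.1.56:/root/bricks/doa/d2 192.168.1.56:/root/bricks/doa/d4 status
     Node Rebalanced-files    size  scanned  failures       status
--------- ---------------- ------- -------- --------- ------------
localhost                0  0Bytes        4         0  in progress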
Comment 3 Anand Avati 2012-03-18 02:40:56 EDT
CHANGE: http://review.gluster.com/2949 (cli/remove-brick: Enhance remove-brick status to display) merged in master by Anand Avati (avati@redhat.com)
