Description of problem:

Volume Name: doa
Type: Distributed-Replicate
Volume ID: 32eaa11d-743c-4e4d-99a6-6993a732e869
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 192.168.1.56:/root/bricks/doa/d1
Brick2: 192.168.1.56:/root/bricks/doa/d3
Brick3: 192.168.1.56:/root/bricks/doa/d2
Brick4: 192.168.1.56:/root/bricks/doa/d4
Options Reconfigured:
geo-replication.indexing: on
features.quota: on

root@vostro:~/programming# gluster --mode=script volume remove-brick doa replica 2 192.168.1.56:/root/bricks/doa/d2 192.168.1.56:/root/bricks/doa/d4 start
Remove Brick successful

root@vostro:~/programming# gluster --mode=script volume remove-brick doa replica 2 192.168.1.56:/root/bricks/doa/d2 192.168.1.56:/root/bricks/doa/d4 status
Rebalance is not running on volume doa

Version-Release number of selected component (if applicable): GlusterFS master [728de5be7ce2975efb59bb5928fd7261d5ec7760]

How reproducible: always

Steps to Reproduce:
1. Create a distributed-replicate volume.
2. Start a remove-brick of one replicate sub-volume.
3. Check the remove-brick status; it reports that rebalance is not running.
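The steps above can be sketched as a shell session. The volume-create invocation is an assumption inferred from the brick layout shown in the volume info (it is not part of the original report), and the session requires a running glusterd, so it is a sketch rather than a self-contained script:

```
# Assumed create command (not in the original report), matching the 2 x 2 layout:
gluster --mode=script volume create doa replica 2 \
    192.168.1.56:/root/bricks/doa/d1 192.168.1.56:/root/bricks/doa/d3 \
    192.168.1.56:/root/bricks/doa/d2 192.168.1.56:/root/bricks/doa/d4
gluster --mode=script volume start doa

# Start decommissioning one replicate sub-volume (the d2/d4 pair)...
gluster --mode=script volume remove-brick doa replica 2 \
    192.168.1.56:/root/bricks/doa/d2 192.168.1.56:/root/bricks/doa/d4 start

# ...then query its status; this is where the misleading message appears.
gluster --mode=script volume remove-brick doa replica 2 \
    192.168.1.56:/root/bricks/doa/d2 192.168.1.56:/root/bricks/doa/d4 status
```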
The above description of the bug is invalid: `remove-brick start` uses rebalance to decommission a node, hence the status that was displayed. However, since rebalance has been enhanced, the status displayed by `remove-brick status` should mirror those updates, so the bug is being kept open.
With the latest git pull [d05708d7976a8340ae7647fd26f38f22f1863b6a], the message is different:

Volume Name: doa
Type: Distributed-Replicate
Volume ID: 57932a5f-a8ef-42d6-9b67-837b65bb7f79
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 192.168.1.56:/root/bricks/doa/d1
Brick2: 192.168.1.56:/root/bricks/doa/d3
Brick3: 192.168.1.56:/root/bricks/doa/d2
Brick4: 192.168.1.56:/root/bricks/doa/d4
Options Reconfigured:
geo-replication.indexing: on
features.quota: on

root@vostro:~/programming# gluster --mode=script volume remove-brick doa replica 2 192.168.1.56:/root/bricks/doa/d2 192.168.1.56:/root/bricks/doa/d4 status
remove-brick not started

root@vostro:~/programming# gluster --mode=script volume remove-brick doa replica 2 192.168.1.56:/root/bricks/doa/d2 192.168.1.56:/root/bricks/doa/d4 start
Remove Brick successful

root@vostro:~/programming# gluster --mode=script volume remove-brick doa replica 2 192.168.1.56:/root/bricks/doa/d2 192.168.1.56:/root/bricks/doa/d4 status
remove-brick not started

Either the remove-brick should fail at start, or the status command should report a proper status message.
CHANGE: http://review.gluster.com/2949 (cli/remove-brick: Enhance remove-brick status to display) merged in master by Anand Avati (avati)