DHT - rebalance - gluster volume rebalance <volname> status shows output even though the user hasn't run rebalance on that volume (it shows remove-brick status)
Cause:
The command "gluster volume rebalance <volname> status" showed rebalance status even though the user had not started a rebalance process on that volume.
Consequence:
Even though the user has not run the rebalance command, it shows the status of a rebalance process.
Fix:
When a rebalance process starts on a volume, it sets the GD_OP_REBALANCE op in volinfo. If the user has not started a rebalance process on a given volume and issues the rebalance status command, the staging stage now validates whether the GD_OP_REBALANCE op is set in volinfo. If it is not set, the command fails with the error message "Rebalance not started."
Result:
The command "gluster volume rebalance <volname> status" no longer shows status if the user has not started a rebalance process on that volume, as illustrated below.
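For example, with this fix in place, the expected behavior is (a sketch; the exact CLI output wording may vary by release, but per the fix it should report "Rebalance not started."):
gluster volume rebalance issue11 status   # fails: "Rebalance not started."
gluster volume rebalance issue11 start
gluster volume rebalance issue11 status   # now reports per-node rebalance status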
Description of problem:
DHT - rebalance - gluster volume rebalance <volname> status shows output even though the user hasn't run rebalance on that volume (it shows remove-brick status)
Version-Release number of selected component (if applicable):
3.3.0.7rhs-1.el6rhs.x86_64
How reproducible:
always
Steps to Reproduce:
1. Create a Distributed volume having 2 or more sub-volumes and start the volume (an example create command is shown after the volume info below).
[root@cutlass issue9]# gluster volume info issue11
Volume Name: issue11
Type: Distribute
Volume ID: c6a094bc-649e-48c4-ac6c-f015e55a0a40
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: mia.lab.eng.blr.redhat.com:/brick1/11
Brick2: fan.lab.eng.blr.redhat.com:/brick1/11
Brick3: fred.lab.eng.blr.redhat.com:/brick1/11
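For reference, a volume with this layout can be created and started as follows (a sketch using the brick paths from the info output above):
gluster volume create issue11 mia.lab.eng.blr.redhat.com:/brick1/11 fan.lab.eng.blr.redhat.com:/brick1/11 fred.lab.eng.blr.redhat.com:/brick1/11
gluster volume start issue11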
2. FUSE-mount the volume on client-1 using "mount -t glusterfs server:/<volume> <client-1_mount_point>":
mount -t glusterfs XXX:/issue11 /mnt/issue11
3. From the mount point, create some files at the root level:
for i in `seq 20 50` ; do touch /mnt/issue11/n$i ; done
4. Remove a brick (or bricks) from the volume:
gluster volume remove-brick issue11 fred.lab.eng.blr.redhat.com:/brick1/11 start
5. Without running the rebalance command, check the status of the rebalance command:
[root@cutlass issue9]# gluster volume rebalance issue11 status
Node                         Rebalanced-files  size  scanned  failures  status
---------------------------  ----------------  ----  -------  --------  -----------
localhost                    0                 0     0        0         not started
mia.lab.eng.blr.redhat.com   0                 0     0        0         not started
10.70.34.91                  0                 0     0        0         not started
fred.lab.eng.blr.redhat.com  7                 0     31       0         completed
Actual results:
Even though the user hasn't run the rebalance command, the status of that command is shown (it is actually the remove-brick migration status).
Expected results:
1) As part of remove-brick (to migrate data), a rebalance process runs in the background, but its status should not be shown to the user when the user checks the rebalance status; the remove-brick status command, shown below, is the intended way to monitor that migration.
2) The status is also misleading: for all other bricks it says 'not started'. Does that mean the process has not started yet and a rebalance is still running in the background?
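For reference, the background migration triggered by remove-brick is meant to be monitored with the remove-brick status command rather than rebalance status, e.g.:
gluster volume remove-brick issue11 fred.lab.eng.blr.redhat.com:/brick1/11 status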
Additional info:
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
http://rhn.redhat.com/errata/RHEA-2014-1278.html