Bug 951488 - DHT - rebalance - gluster volume rebalance <volname> status shows output even though the user hasn't run rebalance on that volume (it shows remove-brick status)
Summary: DHT - rebalance - gluster volume rebalance <volname> status shows output even...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterfs
Version: 2.0
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.0.0
Assignee: Gaurav Kumar Garg
QA Contact: amainkar
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-04-12 10:16 UTC by Rachana Patel
Modified: 2016-06-05 23:37 UTC
CC List: 6 users

Fixed In Version: glusterfs-3.6.0.3-1.el6rhs/RHSS 3.0
Doc Type: Bug Fix
Doc Text:
Cause: The command "gluster volume rebalance <volname> status" showed a rebalance status even though the user had not started a rebalance process on that volume.
Consequence: Even though the user had not run the rebalance command, the status of the rebalance process was displayed.
Fix: When a rebalance process is started on a volume, the GD_OP_REBALANCE op is set in volinfo. If the user has not started a rebalance process on a volume and then issues the rebalance status command, the staging stage now validates whether the GD_OP_REBALANCE op is set in volinfo; if it is not set, the command fails with the error message "Rebalance not started."
Result: The command "gluster volume rebalance <volname> status" does not show a rebalance status if the user has not started a rebalance process on that volume. (See the illustrative sketch below the header fields.)
Clone Of:
Clones: 1089668
Environment:
Last Closed: 2014-09-22 19:27:59 UTC
Embargoed:
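
To make the Doc Text concrete, the following is a minimal sketch of the expected post-fix behavior, reusing the issue11 volume from the reproduction steps below. Only the "Rebalance not started." message comes from the Doc Text; the surrounding CLI output wording is an assumption and may differ between builds.

# On a volume where only "remove-brick ... start" has run and no rebalance
# was ever started, the status query is rejected during staging:
gluster volume rebalance issue11 status
# expected (approximate wording): volume rebalance: issue11: failed: Rebalance not started.

# Once the user explicitly starts a rebalance, GD_OP_REBALANCE is recorded
# in volinfo and the same status query reports actual rebalance progress:
gluster volume rebalance issue11 start
gluster volume rebalance issue11 status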


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2014:1278 0 normal SHIPPED_LIVE Red Hat Storage Server 3.0 bug fix and enhancement update 2014-09-22 23:26:55 UTC

Description Rachana Patel 2013-04-12 10:16:33 UTC
Description of problem:
DHT - rebalance - "gluster volume rebalance <volname> status" shows output even though the user hasn't run rebalance on that volume (it shows the remove-brick status).

Version-Release number of selected component (if applicable):
3.3.0.7rhs-1.el6rhs.x86_64

How reproducible:
always

Steps to Reproduce:
1. Create a distributed volume having 2 or more sub-volumes and start the volume.
[root@cutlass issue9]# gluster volume info issue11
 
Volume Name: issue11
Type: Distribute
Volume ID: c6a094bc-649e-48c4-ac6c-f015e55a0a40
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: mia.lab.eng.blr.redhat.com:/brick1/11
Brick2: fan.lab.eng.blr.redhat.com:/brick1/11
Brick3: fred.lab.eng.blr.redhat.com:/brick1/11



2. FUSE-mount the volume from client-1 using "mount -t glusterfs server:/<volume> <client-1_mount_point>"

mount -t glusterfs XXX:/issue11 /mnt/issue11

3. From the mount point, create some files at the root level.
 for i in `seq 20 50` ; do touch /mnt/issue11/n$i ; done

4. Remove a brick (or bricks) from the volume:
gluster volume remove-brick issue11 fred.lab.eng.blr.redhat.com:/brick1/11 start


5. Do not run the rebalance command, but check the rebalance status (a consolidated script sketch follows the status output below):

[root@cutlass issue9]# gluster volume rebalance issue11 status
                                    Node Rebalanced-files          size       scanned      failures         status
                               ---------      -----------   -----------   -----------   -----------   ------------
                               localhost                0            0            0            0    not started
              mia.lab.eng.blr.redhat.com                0            0            0            0    not started
                             10.70.34.91                0            0            0            0    not started
             fred.lab.eng.blr.redhat.com                7            0           31            0      completed
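
The steps above can be condensed into a single script. This is only a sketch: it assumes the three nodes are already peers in the same trusted storage pool, that /brick1/11 exists on each, and that it is run from one of the storage nodes (which also acts as the client for the mount); all names and paths are taken from the volume info above.

#!/bin/bash
# Sketch of the reproduction steps for this bug.
NODE1=mia.lab.eng.blr.redhat.com
NODE2=fan.lab.eng.blr.redhat.com
NODE3=fred.lab.eng.blr.redhat.com
VOL=issue11
MNT=/mnt/issue11

# 1. Create and start a 3-brick distributed volume.
gluster volume create $VOL $NODE1:/brick1/11 $NODE2:/brick1/11 $NODE3:/brick1/11
gluster volume start $VOL

# 2. FUSE-mount the volume.
mkdir -p $MNT
mount -t glusterfs $NODE1:/$VOL $MNT

# 3. Create some files at the root of the mount.
for i in $(seq 20 50); do touch $MNT/n$i; done

# 4. Start removing one brick; this migrates its data via an internal rebalance.
gluster volume remove-brick $VOL $NODE3:/brick1/11 start

# 5. Without ever running "gluster volume rebalance $VOL start", query the
#    rebalance status; on 3.3.0.7rhs it wrongly reports the remove-brick
#    migration instead of failing.
gluster volume rebalance $VOL status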

  
Actual results:
Even though the user hasn't run the rebalance command, it shows the status of that command.


Expected results:
1) As part of remove-brick (to migrate data), a rebalance runs in the background, but its status should not be shown to the user when the user checks the rebalance status.

2) The status is misleading: for all other bricks it says 'not started'. Does that mean the process has not started yet and a rebalance is still running in the background?
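
For reference, the migration started by remove-brick has its own status query, so the two operations can be reported independently. Below is a minimal sketch reusing the volume and brick from the steps above; the comments describe the intended behavior, not captured output.

# Progress of the data migration kicked off by "remove-brick ... start":
gluster volume remove-brick issue11 fred.lab.eng.blr.redhat.com:/brick1/11 status

# The rebalance status should stay independent of that migration and report
# nothing for this volume until the user explicitly runs:
gluster volume rebalance issue11 start
gluster volume rebalance issue11 status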



Additional info:

Comment 3 Scott Haines 2013-09-27 17:07:45 UTC
Targeting for 3.0.0 (Denali) release.

Comment 6 Gaurav Kumar Garg 2014-04-23 12:30:51 UTC
Block: 1089668

Comment 7 Nagaprasad Sathyanarayana 2014-05-06 11:43:40 UTC
Dev ack to 3.0 RHS BZs

Comment 8 Rachana Patel 2014-06-17 12:08:19 UTC
Verified with glusterfs-3.6.0.18-1.el6rhs.x86_64.

Working as expected, hence moving to Verified.

Comment 10 errata-xmlrpc 2014-09-22 19:27:59 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-1278.html

