Bug 1030426 - Remove-brick status command actually shows the status of the last run rebalance
Status: CLOSED DEFERRED
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: distribute
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Assigned To: Nithya Balachandran
storage-qa-internal@redhat.com
Depends On:
Blocks: 1286203
Reported: 2013-11-14 07:28 EST by Shruti Sampat
Modified: 2015-11-27 07:28 EST (History)
6 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
: 1286203
Environment:
Last Closed: 2015-11-27 07:28:23 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Shruti Sampat 2013-11-14 07:28:35 EST
Description of problem:
-----------------------
After a remove-brick operation on a volume was stopped, rebalance was run on the same volume. After the rebalance completed, the remove-brick status command did not show the status of the most recent remove-brick operation; instead it showed the status of the last rebalance.

Output of remove-brick status before rebalance was started -

[root@rhs ~]# gluster v remove-brick dis_vol 10.70.37.77:/rhs/brick1/b1 status
                                    Node Rebalanced-files          size       scanned      failures       skipped         status run-time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------   ------------   --------------
                               localhost               14        13.7GB            72             0             0      completed           262.00
                             10.70.37.96                0        0Bytes             0             0             0    not started             0.00
                            10.70.37.159                0        0Bytes             0             0             0    not started             0.00
                            10.70.37.140                0        0Bytes             0             0             0    not started             0.00


Output of rebalance status after it was completed -

[root@rhs ~]# gluster v rebalance dis_vol status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost                0        0Bytes            71             0             0            completed               0.00
                             10.70.37.96                0        0Bytes            85             0             0            completed               0.00
                            10.70.37.159                0        0Bytes            87             0             0            completed               0.00
                            10.70.37.140                0        0Bytes            87             0             0            completed               0.00

Output of remove-brick status after completion of rebalance - 

[root@rhs ~]# gluster v remove-brick dis_vol 10.70.37.77:/rhs/brick1/b1 status                                                                                                                                 
                                    Node Rebalanced-files          size       scanned      failures       skipped         status run-time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------   ------------   --------------
                               localhost                0        0Bytes            71             0             0      completed             0.00
                             10.70.37.96                0        0Bytes            85             0             0      completed             0.00
                            10.70.37.159                0        0Bytes            87             0             0      completed             0.00
                            10.70.37.140                0        0Bytes            87             0             0      completed             0.00

The remove-brick status command should always show the status of the last remove-brick operation run.

Version-Release number of selected component (if applicable):
glusterfs 3.4.0.42.1u2rhs

How reproducible:
Always

Steps to Reproduce:
1. Start remove-brick on a volume. After the data migration completes, stop the remove-brick operation and check the remove-brick status.
2. Start rebalance on the same volume.
3. After rebalance completes, check the rebalance status.
4. Check the remove-brick status for the remove-brick operation started in step 1.
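The steps above can be sketched as the following command sequence (volume and brick names taken from the outputs in this report; a running GlusterFS trusted storage pool with the distributed volume already created is assumed):

```shell
# Reproduction sketch for this bug; requires an existing GlusterFS
# trusted storage pool with a distributed volume named dis_vol.
VOL=dis_vol
BRICK=10.70.37.77:/rhs/brick1/b1

# Step 1: start remove-brick, wait for data migration, then stop it.
gluster volume remove-brick $VOL $BRICK start
gluster volume remove-brick $VOL $BRICK status   # poll until "completed"
gluster volume remove-brick $VOL $BRICK stop

# Steps 2-3: run a rebalance on the same volume and wait for it to finish.
gluster volume rebalance $VOL start
gluster volume rebalance $VOL status             # poll until "completed"

# Step 4: query remove-brick status again. Expected: statistics of the
# remove-brick operation from step 1. Actual: the rebalance statistics
# are shown instead.
gluster volume remove-brick $VOL $BRICK status
```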
 
Actual results:
Remove-brick status shows the status of the previous rebalance.

Expected results:
Remove-brick status should show the status of the most recent remove-brick operation.

Additional info:
Comment 4 Susant Kumar Palai 2015-11-27 07:28:23 EST
Cloning this to 3.1. To be fixed in a future release.
