Bug 1024725 - For the nodes which do not participate in remove-brick, remove-brick status gives the output of rebalance.
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfs
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Release: RHGS 2.1.2
Assigned To: Krutika Dhananjay
QA Contact: RamaKasturi
Keywords: ZStream
Depends On:
Blocks: 1040371
 
Reported: 2013-10-30 06:30 EDT by RamaKasturi
Modified: 2015-05-13 12:26 EDT (History)
CC: 9 users

See Also:
Fixed In Version: glusterfs-3.4.0.50rhs-1
Doc Type: Bug Fix
Doc Text:
Previously, status from a previous remove-brick or Rebalance operation was not reset before starting a new remove-brick or Rebalance operation. As a result, remove-brick status displayed the output of a previous Rebalance operation on those nodes which did not participate in an ongoing remove-brick operation. With this update, the status of the remove-brick or Rebalance operation is set to NOT-STARTED before starting remove-brick or Rebalance operations again on all the nodes in the cluster.
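The fix described above can be modeled with a short sketch. This is an illustrative Python model, not the actual glusterd C code; the Node class, field names, and the two-node cluster are assumptions made up for the example. It shows how a stale per-node status from a stopped rebalance leaks into the next remove-brick status unless every node is reset to "not started" first.

```python
NOT_STARTED = "not started"

class Node:
    """Illustrative stand-in for a cluster node's defrag bookkeeping."""
    def __init__(self, name):
        self.name = name
        # In this model, rebalance and remove-brick share one status field.
        self.defrag_status = NOT_STARTED

def start_operation(nodes, participants, reset_first):
    """Start a rebalance/remove-brick run across the cluster.

    reset_first=False models the buggy behavior (stale status kept);
    reset_first=True models the fix (all nodes reset to NOT_STARTED).
    """
    if reset_first:
        for n in nodes:
            n.defrag_status = NOT_STARTED
    for n in nodes:
        if n.name in participants:
            n.defrag_status = "completed"

nodes = [Node("10.70.37.140"), Node("10.70.37.43")]

# A rebalance runs on both nodes and is stopped on 10.70.37.43.
start_operation(nodes, {"10.70.37.140", "10.70.37.43"}, reset_first=False)
nodes[1].defrag_status = "stopped"

# Buggy path: remove-brick starts without a reset, so 10.70.37.43
# still reports the stale "stopped" from the earlier rebalance.
start_operation(nodes, {"10.70.37.140"}, reset_first=False)
buggy = nodes[1].defrag_status

# Fixed path: every node is reset before the operation starts, so a
# non-participating node correctly reports "not started".
start_operation(nodes, {"10.70.37.140"}, reset_first=True)
fixed = nodes[1].defrag_status

print(buggy, "->", fixed)
```

Running the sketch, the non-participating node's status changes from the stale "stopped" to "not started" once the reset is in place, matching the behavior verified in comment 9.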
Story Points: ---
Clone Of:
: 1040371
Environment:
Last Closed: 2014-02-25 02:58:03 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
Attaching screenshot (69.60 KB, image/png)
2013-12-03 08:05 EST, RamaKasturi
Attaching screenshot (114.35 KB, image/png)
2013-12-03 08:06 EST, RamaKasturi

Description RamaKasturi 2013-10-30 06:30:42 EDT
Description of problem:
For the nodes which do not participate in remove-brick, remove-brick status gives the output of rebalance.

Version-Release number of selected component (if applicable):
glusterfs-3.4.0.35.1u2rhs-1.el6rhs.x86_64
glusterfs-geo-replication-3.4.0.35.1u2rhs-1.el6rhs.x86_64
glusterfs-rdma-3.4.0.35.1u2rhs-1.el6rhs.x86_64
glusterfs-debuginfo-3.4.0.35.1u2rhs-1.el6rhs.x86_64
glusterfs-libs-3.4.0.35.1u2rhs-1.el6rhs.x86_64
glusterfs-fuse-3.4.0.35.1u2rhs-1.el6rhs.x86_64
glusterfs-server-3.4.0.35.1u2rhs-1.el6rhs.x86_64
glusterfs-api-3.4.0.35.1u2rhs-1.el6rhs.x86_64
samba-glusterfs-3.6.9-160.3.el6rhs.x86_64


How reproducible:
Always

Steps to Reproduce:
1. Create a distribute volume with 2 bricks.
2. start rebalance on the volume and stop it.
3. Now rebalance status shows "stopped" for the nodes where rebalance was running. The following is the output for the same:

[root@localhost ~]# gluster vol rebalance vol_dis_rep status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost                2         2.0GB            61             0            20            completed              66.00
                            10.70.37.140                0        0Bytes            60             0             0            completed               0.00
                             10.70.37.75                0        0Bytes             0             0             0          not started               0.00
                             10.70.37.43                0        0Bytes             0             0             0              stopped               0.00
volume rebalance: vol_dis_rep: success: 

4. Now start remove brick.
5. Once started, check the output. The following is what it displays:

[root@localhost ~]# gluster vol remove-brick vol_dis_rep 10.70.37.108:/rhs/brick3/b5 10.70.37.140:/rhs/brick3/b6  status
                                    Node Rebalanced-files          size       scanned      failures       skipped         status run-time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------   ------------   --------------
                               localhost                2         2.0GB            61             0             0      completed            66.00
                            10.70.37.140                0        0Bytes            60             0             0      completed             0.00
                             10.70.37.75                0        0Bytes             0             0             0    not started             0.00
                             10.70.37.43                0        0Bytes             0             0             0        stopped             0.00



Actual results:
For the nodes on which remove-brick is not started, it shows the output of rebalance status.

Expected results:
For the nodes on which remove-brick is not started, the status should be shown as "not started", or the nodes which do not participate in remove-brick should not be shown in the status output at all.

Additional info:
Comment 2 Dusmant 2013-10-30 07:08:51 EDT
RHSC remove brick status is showing wrong data... This needs to be fixed...
Comment 4 RamaKasturi 2013-12-03 08:05:23 EST
Remove-brick status output still includes the nodes where rebalance was stopped.

Attaching the screenshot for the same.

From screenshot 8, it is clear that rebalance was stopped on localhost and 10.70.37.182.

From screenshot 9, it is clear that remove-brick was started on 10.70.37.177 and 10.70.37.109.

10.70.37.182 was not participating in remove-brick at all, but it was still shown in the remove-brick status, which is incorrect.

So moving this back to assigned.
Comment 5 RamaKasturi 2013-12-03 08:05:57 EST
Created attachment 832041 [details]
Attaching screenshot
Comment 6 RamaKasturi 2013-12-03 08:06:24 EST
Created attachment 832042 [details]
Attaching screenshot
Comment 9 RamaKasturi 2013-12-19 00:08:48 EST
Verified; works fine with glusterfs-server-3.4.0.50rhs-1.el6rhs.x86_64.

Now remove-brick does not give the output of rebalance. Remove-brick status shows only the nodes which participate in remove-brick.
Comment 10 Pavithra 2014-01-07 02:02:54 EST
Can you please review the doc text for technical accuracy?
Comment 11 Krutika Dhananjay 2014-01-07 02:05:17 EST
LGTM, Pavithra.
Comment 13 errata-xmlrpc 2014-02-25 02:58:03 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-0208.html
