Bug 1386127 - Remove-brick status output is showing status of fix-layout instead of original remove-brick status output
Summary: Remove-brick status output is showing status of fix-layout instead of origina...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: distribute
Version: rhgs-3.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.2.0
Assignee: Nithya Balachandran
QA Contact: Prasad Desala
URL:
Whiteboard:
Depends On: 1389697 1396109
Blocks: 1351528
 
Reported: 2016-10-18 08:46 UTC by Prasad Desala
Modified: 2017-03-23 06:11 UTC (History)
4 users (show)

Fixed In Version: glusterfs-3.8.4-6
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1389697 (view as bug list)
Environment:
Last Closed: 2017-03-23 06:11:45 UTC


Attachments (Terms of Use)


Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2017:0486 normal SHIPPED_LIVE Moderate: Red Hat Gluster Storage 3.2.0 security, bug fix, and enhancement update 2017-03-23 09:18:45 UTC

Description Prasad Desala 2016-10-18 08:46:25 UTC
Description of problem:
=======================
When the steps below were performed, the remove-brick status output showed the status of the fix-layout operation instead of the remove-brick status output.

Version-Release number of selected component (if applicable):
3.8.4-2.el7rhgs.x86_64

How reproducible:
=================
1/1

Steps to Reproduce:
===================
1. Create a distributed replica volume and start it.
2. FUSE mount the volume.
3. Fix the layout by issuing "gluster volume rebalance <vol-name> fix-layout start".
4. After step 3 completes, start remove-brick and check the status of the remove-brick; it will show the correct output.
[root@dhcp42-7 ~]# gluster volume remove-brick distrep replica 2 10.70.42.7:/bricks/brick3/b3 10.70.41.211:/bricks/brick3/b3 status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status  run time in h:m:s
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhst               50       113.7KB          9969             0             0            completed        0:2:7
                            10.70.41.211                0        0Bytes             0             0             0            completed        0:1:17

5. After completion of remove-brick, commit it.
6. Fix the layout again by issuing "gluster volume rebalance <vol-name> fix-layout start".
7. After completion of step-6, start remove-brick and check the status.

It shows the status of fix-layout instead of the remove-brick output.

[root@dhcp42-7 ~]# gluster v remove-brick distrep replica 2 10.70.43.141:/bricks/brick2/b2 10.70.43.156:/bricks/brick2/b2 status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status  run time in h:m:s
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
10.70.43.141                   fix-layout in progress        0:0:7
10.70.43.156                   fix-layout in progress        0:0:7

...
..
.
[root@dhcp42-7 ~]# gluster v remove-brick distrep replica 2 10.70.43.141:/bricks/brick2/b2 10.70.43.156:/bricks/brick2/b2 status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status  run time in h:m:s
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
10.70.43.141                     fix-layout completed        0:5:12
10.70.43.156                     fix-layout completed        0:1:41
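The failing sequence (steps 6 and 7) can be sketched as a shell script. Since running it requires an actual multi-node Gluster cluster, this sketch only prints each command through a `run` wrapper instead of executing it; the volume name and brick paths are the ones from the report.

```shell
#!/bin/sh
# Dry-run sketch of the failing reproduction steps. To execute on a real
# test cluster, drop the echo in run() and invoke the commands directly.
VOLNAME=distrep
BRICK1=10.70.43.141:/bricks/brick2/b2
BRICK2=10.70.43.156:/bricks/brick2/b2

run() { echo "+ $*"; }   # prints the command instead of executing it

run gluster volume rebalance "$VOLNAME" fix-layout start       # step 6
run gluster volume remove-brick "$VOLNAME" replica 2 \
    "$BRICK1" "$BRICK2" start                                  # step 7
run gluster volume remove-brick "$VOLNAME" replica 2 \
    "$BRICK1" "$BRICK2" status    # bug: this printed fix-layout status
```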

Actual results:
===============
The remove-brick status output shows the fix-layout status.

Expected results:
=================
The remove-brick status output should show the original remove-brick status output.

Comment 3 Nithya Balachandran 2016-10-28 06:41:58 UTC
Steps to reproduce this on 3.2.0:

1. On a 2 node cluster, create a volume with 1 brick on each node.
2. From node1, run 
gluster v rebalance <volname> fix-layout start

3. Once the fix-layout has completed, from node1, run
gluster v remove-brick <volname> <brick on node2> start

4. On node1, run
gluster v remove-brick <volname> <brick on node2> status


This will print the fix-layout output.

Running the command on node2 prints the output correctly.
gluster v remove-brick <volname> <brick on node2> status
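The minimal two-node reproduction above can be sketched the same way (dry run; `VOLNAME` and `BRICK_NODE2` are hypothetical placeholder values, not taken from the report).

```shell
#!/bin/sh
# Dry-run sketch of the minimal 2-node reproduction from comment 3.
VOLNAME=testvol                      # hypothetical volume name
BRICK_NODE2=node2:/bricks/brick1/b1  # hypothetical brick on node2

run() { echo "+ $*"; }   # prints the command instead of executing it

# All of the following are run from node1:
run gluster volume rebalance "$VOLNAME" fix-layout start
# ... wait for fix-layout to complete ...
run gluster volume remove-brick "$VOLNAME" "$BRICK_NODE2" start
# Buggy behaviour: on node1 this printed the fix-layout status.
run gluster volume remove-brick "$VOLNAME" "$BRICK_NODE2" status
# Running the same status command on node2 printed the correct output.
```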

Comment 4 Nithya Balachandran 2016-10-28 10:20:13 UTC
Upstream patch at: 
http://review.gluster.org/15749

Comment 10 Atin Mukherjee 2016-11-18 04:04:16 UTC
upstream mainline : http://review.gluster.org/15749
upstream 3.9 : http://review.gluster.org/#/c/15870/

Comment 12 Prasad Desala 2016-12-13 06:48:13 UTC
Verified this BZ on glusterfs version 3.8.4-8.el7rhgs.x86_64.

Below are the steps:
1) Created a distributed replica volume and started it.
2) FUSE mounted the volume.
3) From node1, fixed the layout by issuing "gluster volume rebalance <vol-name> fix-layout start".
4) After fixing the layout, from node1, removed the peer subvol bricks:
gluster v remove-brick <volname> <brick on node3> <brick on node4> start
5) From node1, checked the remove-brick status; it showed the correct remove-brick status output.

Moving this BZ to Verified.

Comment 14 errata-xmlrpc 2017-03-23 06:11:45 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html

