Bug 1386127
Summary: | Remove-brick status output is showing status of fix-layout instead of original remove-brick status output | | |
---|---|---|---|
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Prasad Desala <tdesala> |
Component: | distribute | Assignee: | Nithya Balachandran <nbalacha> |
Status: | CLOSED ERRATA | QA Contact: | Prasad Desala <tdesala> |
Severity: | medium | Docs Contact: | |
Priority: | unspecified | | |
Version: | rhgs-3.2 | CC: | amukherj, rhinduja, rhs-bugs, storage-qa-internal |
Target Milestone: | --- | Keywords: | Regression |
Target Release: | RHGS 3.2.0 | | |
Hardware: | Unspecified | | |
OS: | Unspecified | | |
Whiteboard: | | | |
Fixed In Version: | glusterfs-3.8.4-6 | Doc Type: | If docs needed, set a value |
Doc Text: | | Story Points: | --- |
Clone Of: | | | |
Clones: | 1389697 (view as bug list) | Environment: | |
Last Closed: | 2017-03-23 06:11:45 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | --- |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | | | |
Bug Depends On: | 1389697, 1396109 | | |
Bug Blocks: | 1351528 | | |
Description (Prasad Desala, 2016-10-18 08:46:25 UTC)
Steps to reproduce this on 3.2.0:

1. On a 2-node cluster, create a volume with 1 brick on each node.
2. From node1, run: gluster v rebalance <volname> fix-layout start
3. Once the fix-layout has completed, from node1, run: gluster v remove-brick <volname> <brick on node2> start
4. On node1, run: gluster v remove-brick <volname> <brick on node2> status

Step 4 prints the fix-layout status output instead of the remove-brick status. Running the same status command on node2 prints the output correctly. (A consolidated shell sketch of this sequence appears at the end of this report.)

Upstream patch at: http://review.gluster.org/15749

upstream mainline: http://review.gluster.org/15749
upstream 3.9: http://review.gluster.org/#/c/15870/

Verified this BZ on glusterfs version 3.8.4-8.el7rhgs.x86_64. Below are the steps:

1) Created a distributed-replicate volume and started it.
2) FUSE-mounted the volume.
3) From node1, fixed the layout by issuing "gluster volume rebalance <vol-name> fix-layout start".
4) After fixing the layout, from node1 removed the peer subvol bricks: gluster v remove-brick <volname> <brick on node3> <brick on node4> start
5) From node1, checked the remove-brick status; it shows the correct remove-brick status output.

Moving this BZ to Verified.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html
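A minimal shell sketch of the reproduction sequence above. The volume name (testvol), hostnames (node1, node2), and brick path (/bricks/brick1) are illustrative placeholders, not taken from the report:

```sh
# Run from node1; all names below are illustrative placeholders.
gluster volume create testvol node1:/bricks/brick1 node2:/bricks/brick1
gluster volume start testvol

# Step 2: start a fix-layout rebalance and wait for it to complete.
gluster volume rebalance testvol fix-layout start
gluster volume rebalance testvol status

# Step 3: start removing the brick hosted on node2.
gluster volume remove-brick testvol node2:/bricks/brick1 start

# Step 4: before the fix, this printed the fix-layout status when run
# on node1, while the same command run on node2 printed the correct
# remove-brick status.
gluster volume remove-brick testvol node2:/bricks/brick1 status
```

The verification run exercised the same sequence on a distributed-replicate volume; there the remove-brick start and status commands name every brick of the replica set being removed (the bricks on node3 and node4) in a single invocation.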