Bug 1285170
Summary: glusterd: CLI reports success for rebalance commands (commands that use the op_sm framework) even though staging failed on a follower node.

| | | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Anand Nekkunti <anekkunt> |
| Component: | glusterd | Assignee: | Atin Mukherjee <amukherj> |
| Status: | CLOSED ERRATA | QA Contact: | Bala Konda Reddy M <bmekala> |
| Severity: | unspecified | Priority: | unspecified |
| Version: | rhgs-3.1 | Target Release: | RHGS 3.3.0 |
| CC: | amukherj, asrivast, nlevinki, rhinduja, sasundar, smohan, vbellur | Keywords: | ZStream |
| Hardware: | Unspecified | OS: | Unspecified |
| Fixed In Version: | glusterfs-3.8.4-19 | Doc Type: | Bug Fix |
| Cloned As: | 1287027 (view as bug list) | Type: | Bug |
| Last Closed: | 2017-09-21 04:25:52 UTC | Bug Blocks: | 1287027, 1417147 |
Description
Anand Nekkunti
2015-11-25 07:03:43 UTC

Comment 2 (SATHEESARAN):
Anand, could you provide the exact steps to reproduce this issue?

Comment (Anand Nekkunti):
(In reply to SATHEESARAN from comment #2)
> Anand, could you provide the exact steps to reproduce this issue?

I don't know exactly how to reproduce it. In the code, I modified glusterd_op_stage_rebalance() to return -1 and installed that build on one of the nodes.

Upstream patch: http://review.gluster.org/#/c/12836/

Comment 5 (Anand Nekkunti):
(In reply to SATHEESARAN from comment #2)
Steps to reproduce:

1. Create a 2-node cluster (host1 and host2):
   # gluster peer probe host2
2. Create a distribute volume:
   # gluster vol create VOL host1:/tmp/B1 host2:/tmp/B2
3. Mount the volume and copy some files:
   # mount -t glusterfs host1:/VOL /mnt
   # cp -rf glusterfs /mnt
4. Start rebalance on host1, then kill the rebalance process:
   1. gluster vol rebalance VOL start
   2. grep for the rebalance process and kill it
5. Start rebalance on host1 again.

Expected behavior: the command fails with "rebalance already started".
Actual result: the command reports success.

Note: the same behavior can be reproduced with tier start on a tiering volume.

Comment:
This bug was accidentally moved from POST to MODIFIED via an error in automation; please contact mmccune with any questions.

Comment:
The patch http://review.gluster.org/#/c/12836/ has already made it into the rhgs-3.2.0 code base, hence moving the status to MODIFIED.

Comment (verification):
Verified in 3.8.4-24. Followed the steps in comment 5: started the rebalance, killed the rebalance pid, and started the rebalance once again. The result is as expected:

    [root@dhcp37-135 brick1]# gluster vol rebalance first start
    volume rebalance: first: failed: Rebalance on first is already started

Hence marking the bug as verified.

Comment:
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2774
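For convenience, the reproduction steps from comment 5 can be collected into one script. This is a minimal sketch, not part of the original report: it assumes glusterfs is installed on two peered-capable hosts named host1 and host2, that it runs as root on host1, and that /mnt is free. The `force` flag and the `pkill` pattern are assumptions added for illustration; the original report simply says to grep for the rebalance process and kill it.

```shell
#!/bin/sh
# Sketch of the reproduction steps; requires a live 2-node GlusterFS setup.
set -x

# Step 1: form a 2-node cluster from host1
gluster peer probe host2

# Step 2: distribute volume across both nodes (brick paths from the report;
# "force" is assumed here because bricks live under /tmp)
gluster volume create VOL host1:/tmp/B1 host2:/tmp/B2 force
gluster volume start VOL

# Step 3: mount the volume and copy some files into it
mount -t glusterfs host1:/VOL /mnt
cp -rf /etc/glusterfs /mnt   # any sample data will do

# Step 4: start rebalance, then kill the rebalance process
gluster volume rebalance VOL start
pkill -f 'glusterfs.*rebalance'   # assumed pattern for the rebalance daemon

# Step 5: start rebalance again. Expected: failure with
# "Rebalance on VOL is already started". The bug was that glusterd's
# op_sm framework reported success even though staging failed on the
# follower node.
gluster volume rebalance VOL start
```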