Bug 1285170 - glusterd: CLI shows command success for rebalance commands (commands which use the op_sm framework) even though staging failed on a follower node.
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
Version: 3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: RHGS 3.3.0
Assigned To: Atin Mukherjee
QA Contact: Bala Konda Reddy M
Keywords: ZStream
Depends On:
Blocks: 1287027 1417147
 
Reported: 2015-11-25 02:03 EST by Anand Nekkunti
Modified: 2017-09-21 00:53 EDT
CC: 7 users

See Also:
Fixed In Version: glusterfs-3.8.4-19
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Clones: 1287027
Environment:
Last Closed: 2017-09-21 00:25:52 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Anand Nekkunti 2015-11-25 02:03:43 EST
Description of problem:
If we run the rebalance start command, the CLI prints command success even if staging fails on a follower node.

Actual results:
        The CLI displays command success.

Expected results:
        The command should fail.
Comment 2 SATHEESARAN 2015-12-03 11:34:59 EST
Anand,

Could you provide the exact steps to reproduce this issue?
Comment 3 Anand Nekkunti 2015-12-04 01:42:26 EST
(In reply to SATHEESARAN from comment #2)
> Anand,
> 
> Could you provide exact steps to reproduce this issue ?

Honestly, I don't know how to reproduce it through normal operation. I modified the code (made glusterd_op_stage_rebalance() return -1) and installed that build on one of the nodes.
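
For context, the modification was roughly along these lines. This is a minimal illustrative sketch only, not the patch that was actually tested, and it assumes the upstream signature of glusterd_op_stage_rebalance() in xlators/mgmt/glusterd/src/glusterd-rebalance.c:

    /* Illustrative stub only: replace the body of the existing staging
     * function so that staging always fails on this node. The headers
     * already included by glusterd-rebalance.c are assumed. */
    int
    glusterd_op_stage_rebalance (dict_t *dict, char **op_errstr)
    {
            /* Hand a fake staging error back to the op_sm framework. */
            *op_errstr = gf_strdup ("simulated staging failure");
            return -1;
    }

With a build carrying this change installed on only one node, a "gluster vol rebalance VOL start" issued from another node should fail; the bug is that the CLI still reported success.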
Comment 4 Anand Nekkunti 2015-12-04 01:57:25 EST
Upstream patch: http://review.gluster.org/#/c/12836/
Comment 5 Anand Nekkunti 2015-12-04 02:32:22 EST
(In reply to SATHEESARAN from comment #2)
> Anand,
> 
> Could you provide exact steps to reproduce this issue ?

Steps to reproduce:

1. Create a 2-node cluster (host1 and host2):

    # gluster peer probe host2

2. Create a distribute volume:

    # gluster vol create VOL host1:/tmp/B1 host2:/tmp/B2

3. Mount the volume and copy some files:

    # mount -t glusterfs host1:/VOL /mnt
    # cp -rf glusterfs /mnt

4. Execute the rebalance start command on host1 and kill the rebalance process:
       1. gluster vol rebalance VOL start
       2. grep for the rebalance process and kill it

5. Execute the rebalance start command on host1 again.

Expected behavior:
     The command should fail with "rebalance already started".
Actual result:
     The CLI reports rebalance success.

Note:
The same issue can be reproduced with the tiering start command.
Comment 7 Mike McCune 2016-03-28 19:26:37 EDT
This bug was accidentally moved from POST to MODIFIED via an error in automation; please contact mmccune@redhat.com with any questions.
Comment 8 Atin Mukherjee 2017-02-08 08:49:14 EST
The patch http://review.gluster.org/#/c/12836/ has already been merged into the rhgs-3.2.0 code base, hence moving the status to MODIFIED.
Comment 11 Bala Konda Reddy M 2017-05-06 02:52:33 EDT
Verified in 3.8.4-24.

Followed the steps in comment 5.

Started the rebalance and killed the rebalance PID, then started the rebalance again. The result is as expected: "Rebalance on first is already started".

[root@dhcp37-135 brick1]# gluster vol rebalance first start
volume rebalance: first: failed: Rebalance on first is already started

Hence marking the BZ as verified.
Comment 13 errata-xmlrpc 2017-09-21 00:25:52 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2774
Comment 14 errata-xmlrpc 2017-09-21 00:53:56 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2774
