Bug 1286088 - Rebalance: Stopping the volume when rebalance is in progress gives the message: "Staging Failed"
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: distribute
Hardware: Unspecified OS: Unspecified
Priority: medium Severity: medium
Assigned To: Nithya Balachandran
Keywords: ZStream
Depends On: 982928
Reported: 2015-11-27 05:42 EST by Susant Kumar Palai
Modified: 2016-09-01 03:19 EDT
9 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 982928
Last Closed: 2016-09-01 03:19:58 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments: None
Comment 2 Prasad Desala 2016-09-01 01:15:28 EDT
This issue exists with glusterfs version 3.7.9-10.el7rhgs.x86_64.
When I tried to stop a volume while rebalance was in progress, the command failed with the error "Staging failed".

Here are the steps that were performed:

1. Created a distributed-replicate volume.
2. Mounted the volume and created some files on it.
3. Added a few bricks to the volume and started rebalance.
4. While rebalance was in progress, attempted to stop the volume.
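The steps above can be sketched as a CLI session. This is a minimal reproduction sketch, not the reporter's exact commands: the hostnames (node1..node6), brick paths (/bricks/bN), and mount point are hypothetical; only the volume name distrep comes from the report.

```shell
# 1. Create and start a distributed-replicate volume (2x2; hosts/paths are placeholders)
gluster volume create distrep replica 2 \
    node1:/bricks/b1 node2:/bricks/b2 \
    node3:/bricks/b3 node4:/bricks/b4
gluster volume start distrep

# 2. Mount the volume and create some files
mount -t glusterfs node1:/distrep /mnt/distrep
for i in $(seq 1 100); do dd if=/dev/urandom of=/mnt/distrep/file$i bs=1M count=1; done

# 3. Add bricks and start rebalance
gluster volume add-brick distrep replica 2 node5:/bricks/b5 node6:/bricks/b6
gluster volume rebalance distrep start

# 4. Attempt to stop the volume while rebalance is still running
gluster volume stop distrep
# -> fails at the staging phase; the error text varies by node (see below)
```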

[root@dhcp43-185 ~]# gluster volume stop distrep
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: distrep: failed: Staging failed on dhcp43-57.lab.eng.blr.redhat.com. Error: rebalance session is in progress for the volume 'distrep'
Comment 3 Atin Mukherjee 2016-09-01 03:19:58 EDT
This is not a bug. While rebalance is in progress, you cannot stop the volume, and the exact error message depends on which node staging fails on (it is quite possible that rebalance has finished on the originator node but not on the others). I am closing this BZ.
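Based on the explanation above, the expected workflow is to wait for rebalance to finish on every node before stopping the volume. A minimal sketch (volume name from the report; everything else is a generic illustration):

```shell
# Check per-node rebalance progress; each node reports its own status line
gluster volume rebalance distrep status

# Stop the volume only after every node reports "completed";
# otherwise the stop command fails at the staging phase on whichever
# node still has a rebalance session running.
gluster volume stop distrep
```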
