Bug 982928 - Rebalance : Stopping the volume when rebalance is in progress gives the message : "Staging Failed"
Summary: Rebalance : Stopping the volume when rebalance is in progress gives the message : "Staging Failed"
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: distribute
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Nithya Balachandran
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1286088
 
Reported: 2013-07-10 07:15 UTC by senaik
Modified: 2015-11-27 10:42 UTC
CC List: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Cloned to: 1286088
Environment:
Last Closed: 2015-11-27 10:41:43 UTC
Embargoed:



Description senaik 2013-07-10 07:15:28 UTC
Description of problem:
======================
Stopping the volume when rebalance is in progress gives the message "Staging Failed" on one of the nodes.

Version-Release number of selected component (if applicable):
============================================================= 
3.4.0.12rhs.beta3-1.el6rhs.x86_64


How reproducible:


Steps to Reproduce:
==================
1. Create a 2x2 distributed-replicate volume.

2. Mount the volume and create some files on it.

3. Add 2 more bricks to the volume and start rebalance.

4. While rebalance is in progress, stop the volume:
gluster v stop vol7
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: vol7: failed: rebalance session is in progress for the volume 'vol7'

5. After rebalance completes, add 2 more bricks to the volume and start rebalance again. While this rebalance is in progress, stop the volume once more. This time it reports that staging has failed on one of the nodes (a scripted version of these steps is sketched after the CLI output below):

gluster v stop vol7
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: vol7: failed: Staging failed on 10.70.34.85. Please check the log file for more details.
[root@fillmore ~]# gluster v stop vol7
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: vol7: failed: Staging failed on 10.70.34.85. Please check the log file for more details.
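
For reference, a rough scripted version of steps 1-5 (file counts and sizes are placeholders; hosts and brick paths are taken from the volume info under "Additional info" and may differ on other setups):

----------------------reproduction sketch----------------------
# 1. create and start a 2x2 distributed-replicate volume
gluster volume create vol7 replica 2 \
    10.70.34.85:/rhs/brick1/G1 10.70.34.105:/rhs/brick1/G2 \
    10.70.34.86:/rhs/brick1/G3 10.70.34.85:/rhs/brick1/G4
gluster volume start vol7

# 2. mount the volume and create some files
mount -t glusterfs 10.70.34.85:/vol7 /mnt/vol7
for i in $(seq 1 500); do dd if=/dev/urandom of=/mnt/vol7/file.$i bs=64k count=1; done

# 3./4. add two bricks, start rebalance, and try to stop the volume while it runs
gluster volume add-brick vol7 10.70.34.85:/rhs/brick1/G5 10.70.34.105:/rhs/brick1/G6
gluster volume rebalance vol7 start
gluster volume stop vol7          # expected: "rebalance session is in progress"

# 5. wait for rebalance to finish, add two more bricks, rebalance, and stop again
gluster volume rebalance vol7 status
gluster volume add-brick vol7 10.70.34.85:/rhs/brick1/G7 10.70.34.105:/rhs/brick1/G8
gluster volume rebalance vol7 start
gluster volume stop vol7          # actual: "Staging failed on 10.70.34.85"
----------------------------------------------------------------------------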

----------------------part of glusterd log----------------------

[2013-07-10 06:57:23.742154] E [glusterd-op-sm.c:3558:glusterd_op_ac_stage_op] 0-management: Stage failed on operation 'Volume Stop', Status : -1
[2013-07-10 06:57:29.084405] W [glusterd-volume-ops.c:1049:glusterd_op_stage_stop_volume] 0-management: rebalance session is in progress for the volume 'vol7'
[2013-07-10 06:57:29.084451] E [glusterd-op-sm.c:3558:glusterd_op_ac_stage_op] 0-management: Stage failed on operation 'Volume Stop', Status : -1
[2013-07-10 06:57:43.581011] W [glusterd-volume-ops.c:1049:glusterd_op_stage_stop_volume] 0-management: rebalance session is in progress for the volume 'vol7'
[2013-07-10 06:57:43.581053] E [glusterd-op-sm.c:3558:glusterd_op_ac_stage_op] 0-management: Stage failed on operation 'Volume Stop', Status : -1
[2013-07-10 06:58:12.774736] W [glusterd-volume-ops.c:1049:glusterd_op_stage_stop_volume] 0-management: rebalance session is in progress for the volume 'vol7'
[2013-07-10 06:58:12.774782] E [glusterd-op-sm.c:3558:glusterd_op_ac_stage_op] 0-management: Stage failed on operation 'Volume Stop', Status : -1
[2013-07-10 06:59:42.378945] I [glusterd-handshake.c:358:__server_event_notify] 0-: received defrag status updated
----------------------------------------------------------------------------
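
When the CLI only reports "Staging failed on <node>", the actual reason is logged by glusterd on that node. Assuming the default glusterd log location (the exact path can differ depending on how glusterd was started), the relevant entries can be pulled out with something like:

grep -E 'glusterd_op_stage_stop_volume|Stage failed' /var/log/glusterfs/etc-glusterfs-glusterd.vol.log | tail -n 20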

Actual results:
=============== 
Trying to stop the volume while rebalance is in progress gives the message: "Staging Failed"


Expected results:
==================
When stopping the volume while a rebalance session is in progress, it should give the following message:

volume stop: <vol_name>: failed: rebalance session is in progress for the volume '<vol_name>'
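
Until the message is corrected, the rebalance state can be checked explicitly before attempting the stop, for example:

# stop only once every node reports the rebalance as completed
gluster volume rebalance vol7 status
gluster volume stop vol7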


Additional info:
================= 
 gluster v i vol7
 
Volume Name: vol7
Type: Distributed-Replicate
Volume ID: 8fc74b04-0824-4b34-9b19-8d9c74272f81
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: 10.70.34.85:/rhs/brick1/G1
Brick2: 10.70.34.105:/rhs/brick1/G2
Brick3: 10.70.34.86:/rhs/brick1/G3
Brick4: 10.70.34.85:/rhs/brick1/G4
Brick5: 10.70.34.85:/rhs/brick1/G5
Brick6: 10.70.34.105:/rhs/brick1/G6
Brick7: 10.70.34.85:/rhs/brick1/G7
Brick8: 10.70.34.105:/rhs/brick1/G8

------------------------------------------
gluster peer status
Number of Peers: 2

Hostname: 10.70.34.85
Uuid: f7f22764-80ee-45e9-a9ef-ba07f1e6348e
State: Peer in Cluster (Connected)

Hostname: 10.70.34.86
Uuid: 1d1c763f-c635-49a3-93ae-d7e7e17b58eb
State: Peer in Cluster (Connected)


mount point : /mnt/vol7

