Bug 1046879 - [Rebalance] : After doing add brick and checking rebalance status after restarting glusterd gives the error - "volume rebalance: <vol_name>: failed: error"
Summary: [Rebalance] : After doing add brick and checking rebalance status after resta...
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: distribute
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: Nithya Balachandran
QA Contact: shylesh
URL:
Whiteboard:
Depends On:
Blocks: 1286153
 
Reported: 2013-12-27 08:32 UTC by senaik
Modified: 2015-11-27 12:08 UTC (History)
2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1286153 (view as bug list)
Environment:
Last Closed: 2015-11-27 12:08:33 UTC
Embargoed:



Description senaik 2013-12-27 08:32:02 UTC
Description of problem:
========================
After adding bricks, restarting glusterd, and then checking the rebalance status, the first status query fails with the error "volume rebalance: <vol_name>: failed: error". Subsequent status queries return the expected output.


Version-Release number of selected component (if applicable):
=============================================================
glusterfs 3.4.0.52rhs

How reproducible:
================
quite often 


Steps to Reproduce:
==================
1. Create a distribute-replicate volume and start it

2. Mount the volume and create some files

3. Add 2 bricks and start rebalance

4. Check rebalance status

5. Add 2 more bricks and check rebalance status

6. Restart glusterd and check rebalance status again

[root@jay brick1]# service glusterd restart
Stopping glusterd:                                         [  OK  ]
Starting glusterd:                                         [  OK  ]

[root@jay brick1]# gluster v rebalance vol1 status
volume rebalance: vol1: failed: error

7. Check rebalance status again

gluster v rebalance vol1 status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost               19        19.0MB            69             0             0            completed               1.00
                             10.70.34.88                0        0Bytes            53             0             7            completed               0.00
                             10.70.34.87                0        0Bytes            52             0             0            completed               0.00
                             10.70.34.89                0        0Bytes            53             0             0            completed               0.00
volume rebalance: vol1: success: 
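The reproduction steps above can be sketched as a single shell script. The hostnames, brick paths, and mount point below are illustrative assumptions (the bug report does not list them); the volume layout follows the distribute-replicate volume described in step 1. The script exits early if the gluster CLI is not installed.

```shell
#!/bin/sh
# Reproduction sketch for the glusterd-restart rebalance-status error.
# Hostnames (server1..server4), brick paths, and the mount point are
# illustrative assumptions, not values from the original report.
command -v gluster >/dev/null 2>&1 || { echo "gluster CLI not found; skipping"; exit 0; }

# 1. Create a 2x2 distribute-replicate volume and start it
gluster volume create vol1 replica 2 \
    server1:/rhs/brick1 server2:/rhs/brick1 \
    server3:/rhs/brick1 server4:/rhs/brick1
gluster volume start vol1

# 2. Mount the volume and create some files
mount -t glusterfs server1:/vol1 /mnt/vol1
for i in $(seq 1 50); do
    dd if=/dev/urandom of=/mnt/vol1/file$i bs=1M count=1
done

# 3. Add 2 bricks and start rebalance
gluster volume add-brick vol1 server1:/rhs/brick2 server2:/rhs/brick2
gluster volume rebalance vol1 start

# 4. Check rebalance status
gluster volume rebalance vol1 status

# 5. Add 2 more bricks and check rebalance status
gluster volume add-brick vol1 server3:/rhs/brick2 server4:/rhs/brick2
gluster volume rebalance vol1 status

# 6. Restart glusterd; per the report, the very next status query
#    fails with "volume rebalance: vol1: failed: error"
service glusterd restart
gluster volume rebalance vol1 status

# 7. A second status query returns the expected per-node status table
gluster volume rebalance vol1 status
```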

Actual results:
===============
Checking the rebalance status immediately after restarting glusterd gives the error:
volume rebalance: <vol_name>: failed: error

Expected results:
================
Checking the rebalance status after restarting glusterd should not give an error.


Additional info:

Comment 4 Susant Kumar Palai 2015-11-27 12:08:33 UTC
Cloning this to 3.1, to be fixed in a future release.

