Bug 1017020 - Volume rebalance starts even if few bricks are down in a distributed volume
Status: CLOSED DEFERRED
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: distribute
Version: 2.1
Hardware: Unspecified
OS: Linux
Priority: medium
Severity: high
Target Milestone: ---
Target Release: ---
Assigned To: Nithya Balachandran
QA Contact: Anoop
Depends On: 928646
Blocks: 1286186
 
Reported: 2013-10-09 03:25 EDT by Ramesh N
Modified: 2015-11-27 07:21 EST (History)
CC List: 8 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
: 1286186
Environment:
Last Closed: 2015-11-27 07:17:28 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Ramesh N 2013-10-09 03:25:34 EDT
Description of problem: GlusterFS starts the rebalance task even when some bricks are down in a distributed volume. However, when you check the rebalance status, it shows as failed on all nodes. The same happens for replicated/striped volumes when a complete set of bricks in a replica group is down, and in the 'Remove Brick Start' asynchronous task.

How reproducible:
   Always. Start a rebalance on a distributed volume in which some brick processes are down.

Steps to Reproduce:
1. Create a distributed volume with 3 bricks
2. Kill one of the brick processes
3. Start a rebalance on the volume created
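The steps above can be sketched with the gluster CLI roughly as follows. This is an illustrative reproduction only: the volume name, host names, and brick paths are placeholders, not taken from the original report.

```shell
# Create and start a plain distributed volume with 3 bricks
# (hypothetical hosts and brick paths)
gluster volume create distvol server1:/bricks/b1 server2:/bricks/b2 server3:/bricks/b3
gluster volume start distvol

# Find the PID of one brick process and kill it to simulate a brick going down
gluster volume status distvol        # note the PID of one brick
kill -9 <brick-pid>

# The rebalance is still accepted and returns a task ID instead of erroring out
gluster volume rebalance distvol start
gluster volume rebalance distvol status   # later shows 'failed' on all nodes
```

This requires a running Gluster cluster; the exact status output varies by release.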

Actual results:

   The rebalance task starts and returns a task ID, but checking the task/rebalance status shows it as failed on all nodes.

Expected results:

   'Rebalance Start' should report an error and not start the rebalance task at all.

Additional info:

  The same behaviour is reproducible on a replicated volume when all bricks of a replica group are down, and in the 'remove brick start' (brick migration) use case.
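The replicated variant can be sketched in the same way. Again a hypothetical setup: a 2x2 distributed-replicated volume where both bricks of one replica pair are killed, leaving an entire subvolume down.

```shell
# 2x2 distributed-replicated volume (hypothetical hosts and brick paths)
gluster volume create replvol replica 2 \
    server1:/bricks/r1 server2:/bricks/r2 \
    server3:/bricks/r3 server4:/bricks/r4
gluster volume start replvol

# Kill both bricks of one replica pair so the whole subvolume is offline
gluster volume status replvol        # note the PIDs of the r1 and r2 bricks
kill -9 <pid-of-r1> <pid-of-r2>

# Both rebalance and remove-brick migration are still accepted
gluster volume rebalance replvol start
gluster volume remove-brick replvol server3:/bricks/r3 server4:/bricks/r4 start
```

As with the distributed case, the expectation is that these commands should refuse to start while a complete replica group is down.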
Comment 2 Dusmant 2013-10-10 04:54:18 EDT
I am raising this bug's priority to Urgent because the RHSC Corbett rebalance feature will not be complete without it. I have discussed this with Vivek, and he will assign it to someone soon.

We need the fix in the U2 branch by 18 Oct, or by 21 Oct at the latest.
Comment 3 Sayan Saha 2014-10-10 14:24:54 EDT
This is a bug, not an RFE. Making that change.
