Bug 1371772 - Rebalance start is not checking all the volume bricks are up
Summary: Rebalance start is not checking all the volume bricks are up
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: distribute
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Nithya Balachandran
QA Contact: Prasad Desala
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-08-31 05:19 UTC by Byreddy
Modified: 2016-11-14 03:50 UTC
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-11-14 03:50:31 UTC
Target Upstream Version:



Description Byreddy 2016-08-31 05:19:44 UTC
Description of problem:
======================
Rebalance start succeeds without checking that all the volume bricks are up;
the rebalance status then reports failure, which is the expected outcome when
bricks are down.

Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.7.9-12.el7rhgs.x86_64


How reproducible:
=================
Always

Steps to Reproduce:
====================
1. Have a one- or two-node cluster
2. Create a simple distribute volume using 2 bricks and start it
3. Fuse mount the volume
4. Kill one of the volume bricks
5. Write enough data on the mount point // e.g., untar the kernel source
6. Now trigger the rebalance // gluster volume rebalance <vol-name> start --> this will succeed
7. Check the rebalance status // it will show failure, which is expected (see the shell sketch below)
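
A minimal shell sketch of the steps above. The hostname (server1), brick
paths (/bricks/brick1, /bricks/brick2), volume name (testvol), mount point
(/mnt/testvol), and tarball path are hypothetical placeholders:

# Steps 1-2: create and start a plain distribute volume with 2 bricks
gluster volume create testvol server1:/bricks/brick1 server1:/bricks/brick2
gluster volume start testvol
# Step 3: fuse mount the volume
mkdir -p /mnt/testvol
mount -t glusterfs server1:/testvol /mnt/testvol
# Step 4: read the PID of one brick from volume status and kill it
gluster volume status testvol
kill -9 <brick-pid>
# Step 5: write enough data on the mount point
tar -xf /path/to/linux-kernel.tar.xz -C /mnt/testvol
# Step 6: trigger the rebalance -- this succeeds despite the dead brick
gluster volume rebalance testvol start
# Step 7: the status then reports failure
gluster volume rebalance testvol status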

Actual results:
===============
Rebalance start succeeds even when volume bricks are down.


Expected results:
==================
Rebalance start should check that all volume bricks are up and should throw a proper error message if any brick is down.
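
As a rough sketch of the expected behavior (not the actual fix, and assuming
the "Online" flag is the second-to-last column of the Brick lines in
"gluster volume status" output for this glusterfs version), a wrapper could
enforce the check like this:

#!/bin/sh
# Refuse to start a rebalance while any brick of the volume is offline.
# The volume name and the status-output column layout are assumptions;
# verify both against the local installation.
VOL=testvol
if gluster volume status "$VOL" | awk '/^Brick/ { if ($(NF-1) != "Y") down=1 } END { exit down }'
then
    gluster volume rebalance "$VOL" start
else
    echo "Error: volume $VOL has one or more bricks down; not starting rebalance" >&2
    exit 1
fi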


Additional info:

