Bug 1371772

Summary: Rebalance start does not check that all the volume bricks are up
Product: Red Hat Gluster Storage
Component: distribute
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Status: CLOSED NOTABUG
Severity: high
Priority: unspecified
Reporter: Byreddy <bsrirama>
Assignee: Nithya Balachandran <nbalacha>
QA Contact: Prasad Desala <tdesala>
CC: amukherj, rhs-bugs, storage-qa-internal
Last Closed: 2016-11-14 03:50:31 UTC
Type: Bug

Description Byreddy 2016-08-31 05:19:44 UTC
Description of problem:
======================
Rebalance start succeeds without checking whether all the volume bricks are up;
the subsequent rebalance status then shows a failure, which is expected.

Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.7.9-12.el7rhgs.x86_64


How reproducible:
=================
Always

Steps to Reproduce:
====================
1. Have a one- or two-node cluster
2. Create a simple distribute volume using 2 bricks and start it
3. FUSE-mount the volume
4. Kill one of the volume bricks
5. Write enough data on the mount point // e.g. untar the kernel source
6. Now trigger the rebalance // gluster volume rebalance <vol-name> start --> this will succeed
7. Check the rebalance status // it will show a failure, which is expected (see the shell sketch below)
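
A minimal shell sketch of the reproduction, assuming a two-node cluster
(server1, server2), a volume named distvol, and brick paths under /bricks;
all of these names are placeholders for the test setup:

    gluster volume create distvol server1:/bricks/b1 server2:/bricks/b2
    gluster volume start distvol
    mount -t glusterfs server1:/distvol /mnt/distvol

    # Kill one brick process: find its PID in the volume status output.
    gluster volume status distvol        # note the PID of one brick
    kill -9 <brick-pid>

    # Write data through the mount, e.g. untar a kernel source tree.
    tar -xf linux-4.x.tar.xz -C /mnt/distvol

    # Rebalance start succeeds even though a brick is down (the bug)...
    gluster volume rebalance distvol start
    # ...while the status then reports a failure, as expected.
    gluster volume rebalance distvol status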

Actual results:
===============
Rebalance start succeeds even when one or more volume bricks are down.


Expected results:
==================
Rebalance start should check that all volume bricks are up and, if any brick is down, fail with a proper error message.
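
Until such a check exists, an operator could gate the rebalance on brick
health manually. A hypothetical sketch, assuming the volume is named distvol
and keying off the "Online" field of gluster volume status detail output:

    # Hypothetical pre-check: refuse to start a rebalance if any brick is down.
    if gluster volume status distvol detail | grep -q '^Online.*: N'; then
        echo "ERROR: one or more bricks of distvol are down; not starting rebalance" >&2
    else
        gluster volume rebalance distvol start
    fi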


Additional info: