Bug 1286146

Summary: Remove-brick: 'gluster volume status' command fails when glusterd is killed before starting remove-brick and then brought back up.
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Susant Kumar Palai <spalai>
Component: distribute
Assignee: Raghavendra G <rgowdapp>
Status: CLOSED NOTABUG
QA Contact: Anoop <annair>
Severity: high
Docs Contact:
Priority: unspecified
Version: rhgs-3.1
CC: bsrirama, grajaiya, knarra, mmahoney, mmccune, rhs-bugs, sasundar, sdharane, spalai, ssampat, storage-qa-internal, vbellur
Target Milestone: ---
Keywords: ZStream
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1048765
Environment:
Last Closed: 2015-12-04 08:13:56 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1048765
Bug Blocks:

Comment 2 SATHEESARAN 2015-12-03 15:33:48 UTC
<snip>
How reproducible:
Observed once.

Steps to Reproduce:
1. Create a 2x2 distributed-replicate volume with one brick on each server in a 4-server cluster, start and mount it, and create some data on the mount point.
2. Kill glusterd on node1 and node2 (these hold the bricks that form one replica pair).
3. Start remove-brick on the volume.
4. Start glusterd on node1 and node2.
5. Run the 'gluster volume status' command for that volume on any of the nodes (a CLI sketch of these steps follows the snip).

Actual results:
The volume status command fails with the message described above.

Expected results:
The volume status command should not fail.

</snip>
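
For reference, here is a minimal CLI sketch of the reproduction steps above. The volume name (testvol), hostnames (node1-node4), brick path (/bricks/brick1) and mount point are illustrative assumptions, and since the steps do not say which replica pair is removed, node3/node4 is used here:

# on any node: create, start and mount a 2x2 distributed-replicate volume
# (add 'force' if the bricks sit on the root filesystem)
gluster volume create testvol replica 2 \
    node1:/bricks/brick1 node2:/bricks/brick1 \
    node3:/bricks/brick1 node4:/bricks/brick1
gluster volume start testvol
mkdir -p /mnt/testvol
mount -t glusterfs node1:/testvol /mnt/testvol
cp -a /etc /mnt/testvol/                 # create some data on the mount point

# on node1 and node2: kill glusterd (these hold one replica pair)
pkill glusterd                           # or: systemctl stop glusterd

# on node3 or node4: start remove-brick while the other glusterd instances are down
gluster volume remove-brick testvol node3:/bricks/brick1 node4:/bricks/brick1 start

# on node1 and node2: bring glusterd back up
systemctl start glusterd

# on any node: this is the command that failed in the original report
gluster volume status testvol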

Byreddy,

Could you test this behaviour with RHGS 3.1.2 (nightly)?

Comment 3 Byreddy 2015-12-04 06:49:40 UTC
(In reply to SATHEESARAN from comment #2)
> Byreddy,
> 
> Could you test this behaviour with RHGS 3.1.2 (nightly)?

The issue mentioned above is no longer seen with the 3.1.2 build (3.7.5-8).
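
For completeness, a quick sanity check of the build under test (glusterfs-server is the assumed package name; the version string comes from the comment above):

rpm -q glusterfs-server     # expected to report 3.7.5-8 for the RHGS 3.1.2 nightly
gluster volume status       # now completes successfully after glusterd is restarted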

Comment 4 SATHEESARAN 2015-12-04 08:13:56 UTC
Based on comment 3, closing this bug for RHGS 3.1.2.