<snip>
How reproducible:
Observed once.
Steps to Reproduce:
1. Create a distributed-replicate volume (2x2, with one brick on each server in a 4-server cluster), start and mount it, and create data on the mount point.
2. Kill glusterd on node1 and node2 (these hold the bricks that form one replica pair).
3. Start remove-brick on the volume.
4. Start glusterd on node1 and node2.
5. Run the 'gluster volume status' command for that volume on any of the nodes (see the command sketch below).
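For reference, a command-level sketch of the steps above. The volume name (testvol), brick path (/bricks/brick1), hostnames (node1..node4) and mount point are placeholders for illustration, not taken from the original report:

  # on any node
  gluster volume create testvol replica 2 node1:/bricks/brick1 node2:/bricks/brick1 node3:/bricks/brick1 node4:/bricks/brick1
  gluster volume start testvol
  mount -t glusterfs node1:/testvol /mnt/testvol   # then create some files under /mnt/testvol

  # on node1 and node2 (one replica pair)
  pkill glusterd

  # on node3 or node4, remove the other replica pair
  gluster volume remove-brick testvol node3:/bricks/brick1 node4:/bricks/brick1 start

  # on node1 and node2
  service glusterd start      # or 'systemctl start glusterd'

  # on any node
  gluster volume status testvol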
Actual results:
The volume status command fails with the error message described above.
Expected results:
The volume status command should not fail.
</snip>
Byreddy,
Could you test this behaviour with RHGS 3.1.2 (nightly)?
(In reply to SATHEESARAN from comment #2)
> <snip>
>
> Byreddy,
>
> Could you test this behaviour with RHGS 3.1.2 (nightly)?
No longer seeing the issue mentioned above with the 3.1.2 build (3.7.5-8).