Bug 902214

Summary: "volume status" for single brick fails if brick is not on the server where peer command was issued.
Product: [Red Hat Storage] Red Hat Gluster Storage Reporter: Vidya Sakar <vinaraya>
Component: glusterd    Assignee: Raghavendra Talur <rtalur>
Status: CLOSED ERRATA QA Contact: SATHEESARAN <sasundar>
Severity: unspecified Docs Contact:
Priority: medium    
Version: 2.0    CC: amarts, gluster-bugs, kaushal, rfortier, rhs-bugs, sdharane, shaines, shtripat, vbellur
Target Milestone: ---   
Target Release: ---   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: glusterfs-3.4.0qa8 Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 888752 Environment:
Last Closed: 2013-09-23 22:39:25 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On: 888752    
Bug Blocks: 877961, 882814, 918453    

Description Vidya Sakar 2013-01-21 07:32:21 UTC
+++ This bug was initially created as a clone of Bug #888752 +++

For "volume status" commands, the source glusterd depends on some keys to be set in the context dictionary to modify and merge the replies sent by other peers. These keys are set in the commit-op on the source.  
But in "volume status" for a single brick, with the brick on another peer, the commit-op finishes without setting the required keys, which prevents the replies from other peers from being merged properly and causes the command to fail.

--- Additional comment from Vijay Bellur on 2012-12-27 02:40:56 EST ---

CHANGE: http://review.gluster.org/4347 (glusterd: "volume status" for remote brick fails on cli.) merged in master by Vijay Bellur (vbellur)

Comment 3 SATHEESARAN 2013-08-07 07:51:03 UTC
Verified with 3.4.0.17rhs-1

Followed these steps:
1. Created a distributed volume with a single brick on RHS Node1.
2. Started the volume.
3. Issued "gluster volume status" and "gluster volume info" for the created volume
from the other nodes (the corresponding CLI commands are sketched below).
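
The verification corresponds roughly to the following CLI sequence; VOLNAME, the node names, and the brick path are placeholders, and the last command is the single-brick status form that the original report describes:

# On RHS Node1: create and start a single-brick (distribute) volume
gluster volume create VOLNAME node1:/rhs/brick1/VOLNAME
gluster volume start VOLNAME

# From another node in the trusted storage pool
gluster volume status VOLNAME
gluster volume info VOLNAME

# Status for a single brick that is hosted on a different peer
gluster volume status VOLNAME node1:/rhs/brick1/VOLNAME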

There were no issues.

Moving it to VERIFIED

Comment 4 Scott Haines 2013-09-23 22:39:25 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. 

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html
