Bug 902214 - "volume status" for single brick fails if brick is not on the server where peer command was issued.
"volume status" for single brick fails if brick is not on the server where pe...
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
Version: 2.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assigned To: Raghavendra Talur
QA Contact: SATHEESARAN
Depends On: 888752
Blocks: 877961 882814 918453
 
Reported: 2013-01-21 02:32 EST by Vidya Sakar
Modified: 2013-09-23 18:43 EDT
CC: 9 users

See Also:
Fixed In Version: glusterfs-3.4.0qa8
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 888752
Environment:
Last Closed: 2013-09-23 18:39:25 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Vidya Sakar 2013-01-21 02:32:21 EST
+++ This bug was initially created as a clone of Bug #888752 +++

For "volume status" commands, the source glusterd depends on some keys to be set in the context dictionary to modify and merge the replies sent by other peers. These keys are set in the commit-op on the source.  
But in "volume status" for a single brick, with the brick on another peer, the commit-op finishes without setting the required keys, which prevents the replies from other peers from being merged properly and causes the command to fail.

--- Additional comment from Vijay Bellur on 2012-12-27 02:40:56 EST ---

CHANGE: http://review.gluster.org/4347 (glusterd: "volume status" for remote brick fails on cli.) merged in master by Vijay Bellur (vbellur@redhat.com)
Comment 3 SATHEESARAN 2013-08-07 03:51:03 EDT
Verified with 3.4.0.17rhs-1

Steps followed:
1. Created a distributed volume with a single brick on RHS Node1
2. Started the volume
3. Issued "gluster volume status" and "gluster volume info" for the created volume from the other nodes in the cluster.

There were no issues.
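
For reference, a hedged transcript of those steps (the hostnames, volume name and brick path are examples):

# on RHS Node1:
gluster volume create testvol node1:/bricks/brick1
gluster volume start testvol

# from any other node in the cluster:
gluster volume status testvol
gluster volume info testvol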

Moving it to VERIFIED
Comment 4 Scott Haines 2013-09-23 18:39:25 EDT
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. 

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html
