Bug 1289584

Summary: brick_up_status in tests/volume.rc is not correct
Product: [Community] GlusterFS
Component: tests
Version: mainline
Status: CLOSED CURRENTRELEASE
Severity: unspecified
Priority: unspecified
Reporter: Kaushal <kaushal>
Assignee: Kaushal <kaushal>
CC: bugs, rtalur
Keywords: Reopened
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Fixed In Version: glusterfs-3.8rc2
Doc Type: Bug Fix
Story Points: ---
Last Closed: 2016-06-16 13:49:19 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---

Description Kaushal 2015-12-08 13:46:15 UTC
Description of problem:
The brick_up_status function wasn't correct after the introduction of the RDMA port into the `volume status` output.

This caused regression tests that use this function to fail in my SaltStack-config-based regression environment. It should also have caused the tests to fail in the current Jenkins environment, but I'm not sure why they kept passing there.
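
For illustration, a brick's "Online" state used to be readable from a fixed column of the plain `volume status` table; once the table gained a separate RDMA Port column, a fixed-column parse silently reads the wrong field. The layout and snippet below are representative only, not the exact pre-fix code in tests/volume.rc:

    # Post-RDMA `gluster volume status` layout (representative):
    # Gluster process                        TCP Port  RDMA Port  Online  Pid
    # Brick 127.1.1.1:/d/backends/patchy1    49152     0          Y       12345
    #
    # A fixed-column parse written for the old single "Port" layout, where
    # awk field 4 was the Online flag, now prints the RDMA port instead:
    gluster volume status patchy | grep "/d/backends/patchy1" | awk '{print $4}'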

Comment 1 Vijay Bellur 2015-12-08 13:47:12 UTC
REVIEW: http://review.gluster.org/12913 (tests: fix brick_up_status) posted (#1) for review on master by Kaushal M (kaushal)

Comment 2 Vijay Bellur 2015-12-09 12:18:28 UTC
COMMIT: http://review.gluster.org/12913 committed in master by Kaleb KEITHLEY (kkeithle) 
------
commit ba73b0a25ecdf1c8476eead8105a8edc8031b31c
Author: Kaushal M <kaushal>
Date:   Tue Dec 8 19:06:24 2015 +0530

    tests: fix brick_up_status
    
    The brick_up_status function wasn't correct after the introduction of
    the RDMA port into the `volume status` output.
    
    It has been fixed to use the XML brick status of a specific brick
    instead of normal CLI output.
    
    Change-Id: I5327e1a32b1c6f326bc3def735d0daa9ea320074
    BUG: 1289584
    Signed-off-by: Kaushal M <kaushal>
    Reviewed-on: http://review.gluster.org/12913
    Reviewed-by: Kaleb KEITHLEY <kkeithle>
    Tested-by: Gluster Build System <jenkins.com>
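
For reference, a minimal sketch of the XML-based check the commit describes: query the status of one specific brick with `--xml` and extract its <status> field (1 = up, 0 = down) instead of parsing the human-readable table. The names below ($CLI, $V0, $H0, $B0, EXPECT_WITHIN, $PROCESS_UP_TIMEOUT) follow the usual test-framework conventions; the exact helper in tests/volume.rc may differ:

    function brick_up_status {
            local vol=$1
            local host=$2
            local brickpath=$3
            # Prints 1 if the brick process is up, 0 otherwise
            $CLI volume status $vol $host:$brickpath --xml | \
                    sed -ne 's/.*<status>\([01]\)<\/status>.*/\1/p'
    }

    # Typical usage in a .t test:
    # EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1" brick_up_status $V0 $H0 $B0/brick0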

Comment 3 Niels de Vos 2016-06-16 13:49:19 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user