Bug 1289584 - brick_up_status in tests/volume.rc is not correct
Summary: brick_up_status in tests/volume.rc is not correct
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: tests
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Kaushal
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-12-08 13:46 UTC by Kaushal
Modified: 2016-06-16 13:49 UTC
CC: 2 users

Fixed In Version: glusterfs-3.8rc2
Clone Of:
Environment:
Last Closed: 2016-06-16 13:49:19 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Kaushal 2015-12-08 13:46:15 UTC
Description of problem:
The brick_up_status function wasn't correct after the introduction of the RDMA port into the `volume status` output.

This caused regression tests that use this function to fail in my SaltStack-based regression environment. It should also have caused the tests to fail in the current Jenkins environment, but I'm not sure why they kept passing there.
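
(For context: since RDMA support was added, the plain-text `volume status` listing carries both a TCP and an RDMA port column, with a header roughly along the lines of

    Gluster process                       TCP Port  RDMA Port  Online  Pid

so any helper that locates the port or "Online" field by counting columns in that output can end up reading the wrong field. The exact column widths above are only illustrative.)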

Comment 1 Vijay Bellur 2015-12-08 13:47:12 UTC
REVIEW: http://review.gluster.org/12913 (tests: fix brick_up_status) posted (#1) for review on master by Kaushal M (kaushal)

Comment 2 Vijay Bellur 2015-12-09 12:18:28 UTC
COMMIT: http://review.gluster.org/12913 committed in master by Kaleb KEITHLEY (kkeithle) 
------
commit ba73b0a25ecdf1c8476eead8105a8edc8031b31c
Author: Kaushal M <kaushal>
Date:   Tue Dec 8 19:06:24 2015 +0530

    tests: fix brick_up_status
    
    The brick_up_status function wasn't correct after the introduction of
    the RDMA port into the `volume status` output.
    
    It has been fixed to use the XML brick status of a specific brick
    instead of normal CLI output.
    
    Change-Id: I5327e1a32b1c6f326bc3def735d0daa9ea320074
    BUG: 1289584
    Signed-off-by: Kaushal M <kaushal>
    Reviewed-on: http://review.gluster.org/12913
    Reviewed-by: Kaleb KEITHLEY <kkeithle>
    Tested-by: Gluster Build System <jenkins.com>

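For readers of this report, a minimal sketch of the XML-based approach the commit message describes could look like the following. This is an illustration, not the merged patch; the $CLI wrapper, the argument order, and the sed expression are assumptions based on the usual tests/*.rc conventions.

    # Sketch: report whether a single brick is online, using the XML output
    # of 'volume status' restricted to that one brick.
    # $CLI is assumed to be the gluster CLI wrapper used by the test framework.
    function brick_up_status {
            local vol=$1
            local host=$2
            local brick=$3
            # Limiting the query to one brick leaves a single per-brick
            # <status> element in the XML, so a simple sed extraction is
            # enough and no column counting on the plain-text output is needed.
            $CLI volume status $vol $host:$brick --xml | \
                    sed -ne 's|.*<status>\([01]\)</status>.*|\1|p'
    }

In the Gluster test harness such a helper would typically be consumed through EXPECT_WITHIN, for example (variable names follow the usual include.rc conventions and are shown here only as an assumption):

    EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1" brick_up_status $V0 $H0 $B0/brick0
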
Comment 3 Niels de Vos 2016-06-16 13:49:19 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

