Bug 1289584 - brick_up_status in tests/volume.rc is not correct
Product: GlusterFS
Classification: Community
Component: tests
Assigned To: Kaushal
Status: Reopened
Reported: 2015-12-08 08:46 EST by Kaushal
Modified: 2016-06-16 09:49 EDT (History)
Fixed In Version: glusterfs-3.8rc2
Doc Type: Bug Fix
Last Closed: 2016-06-16 09:49:19 EDT
Type: Bug

Description Kaushal 2015-12-08 08:46:15 EST
Description of problem:
The brick_up_status function was no longer correct after the introduction of the RDMA port into the `volume status` output.

This caused regression tests using this function to fail in my SaltStack-config-based regression environment. It should also have caused the tests to fail in the current Jenkins environment, but I'm not sure why they kept passing there.
Comment 1 Vijay Bellur 2015-12-08 08:47:12 EST
REVIEW: http://review.gluster.org/12913 (tests: fix brick_up_status) posted (#1) for review on master by Kaushal M (kaushal@redhat.com)
Comment 2 Vijay Bellur 2015-12-09 07:18:28 EST
COMMIT: http://review.gluster.org/12913 committed in master by Kaleb KEITHLEY (kkeithle@redhat.com) 
commit ba73b0a25ecdf1c8476eead8105a8edc8031b31c
Author: Kaushal M <kaushal@redhat.com>
Date:   Tue Dec 8 19:06:24 2015 +0530

    tests: fix brick_up_status

    The brick_up_status function wasn't correct after the introduction of
    the RDMA port into the `volume status` output.
    It has been fixed to use the XML brick status of a specific brick
    instead of normal CLI output.
    Change-Id: I5327e1a32b1c6f326bc3def735d0daa9ea320074
    BUG: 1289584
    Signed-off-by: Kaushal M <kaushal@redhat.com>
    Reviewed-on: http://review.gluster.org/12913
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
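The commit above replaces column-position parsing of the plain CLI output with the XML brick status, which is unaffected when new columns such as the RDMA port are added. A minimal, self-contained sketch of that XML-parsing idea (the sample XML, variable names, and sed expression are illustrative assumptions, not the exact patch):

```shell
#!/bin/sh
# Illustrative sample mimicking `volume status <vol> <host>:<brick> --xml`
# output; in the real helper the XML would come from the gluster CLI.
sample_xml='<cliOutput><volStatus><volumes><volume>
<node><hostname>host1</hostname><path>/bricks/b1</path>
<status>1</status><port>49152</port></node>
</volume></volumes></volStatus></cliOutput>'

# Extract the <status> value (1 = up, 0 = down) directly, instead of
# counting whitespace-separated CLI columns, which broke when the RDMA
# port column was added to the table.
status=$(printf '%s\n' "$sample_xml" \
    | sed -n 's/.*<status>\([01]\)<\/status>.*/\1/p')

if [ "$status" = "1" ]; then
    echo "Y"
else
    echo "N"
fi
```

In the actual test helper the XML would be produced by something like `$CLI volume status $vol $host:$brick --xml` rather than a canned string, but the parsing principle is the same.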
Comment 3 Niels de Vos 2016-06-16 09:49:19 EDT
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
