Bug 1294497

Summary: gluster volume status xml output of tiered volume has all the common services tagged under <coldBricks>
Product: [Community] GlusterFS
Reporter: hari gowtham <hgowtham>
Component: tiering
Assignee: hari gowtham <hgowtham>
Status: CLOSED CURRENTRELEASE
QA Contact: bugs <bugs>
Severity: low
Docs Contact:
Priority: unspecified
Version: mainline
CC: aloganat, bugs, dlambrig, hgowtham, nchilaka, rhs-bugs, rkavunga
Target Milestone: ---
Keywords: Triaged, ZStream
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: glusterfs-3.8rc2
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1278394
: 1318505 (view as bug list)
Environment:
Last Closed: 2016-06-16 13:52:41 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1272318, 1278394
Bug Blocks: 1318505

Comment 1 Vijay Bellur 2015-12-28 13:50:35 UTC
REVIEW: http://review.gluster.org/13101 (Cli/tier: separating services from cold bricks in xml) posted (#1) for review on master by hari gowtham (hari.gowtham005)

Comment 2 Vijay Bellur 2015-12-30 05:31:54 UTC
REVIEW: http://review.gluster.org/13101 (Cli/tier: separating services from cold bricks in xml) posted (#2) for review on master by hari gowtham (hari.gowtham005)

Comment 3 Vijay Bellur 2015-12-30 05:35:11 UTC
REVIEW: http://review.gluster.org/13101 (Cli/tier: separating services from cold bricks in xml) posted (#3) for review on master by hari gowtham (hari.gowtham005)

Comment 4 Vijay Bellur 2016-01-05 06:06:46 UTC
REVIEW: http://review.gluster.org/13101 (Cli/tier: separating services from cold bricks in xml) posted (#4) for review on master by hari gowtham (hari.gowtham005)

Comment 5 Vijay Bellur 2016-01-08 05:44:40 UTC
REVIEW: http://review.gluster.org/13101 (Cli/tier: separating services from cold bricks in xml) posted (#5) for review on master by hari gowtham (hari.gowtham005)

Comment 6 Vijay Bellur 2016-01-11 12:37:49 UTC
REVIEW: http://review.gluster.org/13101 (Cli/tier: separating services from cold bricks in xml) posted (#6) for review on master by hari gowtham (hari.gowtham005)

Comment 7 Vijay Bellur 2016-02-29 09:36:34 UTC
REVIEW: http://review.gluster.org/13101 (Cli/tier: separating services from cold bricks in xml) posted (#7) for review on master by hari gowtham (hari.gowtham005)

Comment 8 Vijay Bellur 2016-03-01 13:01:10 UTC
REVIEW: http://review.gluster.org/13101 (Cli/tier: separating services from cold bricks in xml) posted (#8) for review on master by hari gowtham (hari.gowtham005)

Comment 9 Vijay Bellur 2016-03-03 09:41:11 UTC
REVIEW: http://review.gluster.org/13101 (Cli/tier: separating services from cold bricks in xml) posted (#9) for review on master by hari gowtham (hari.gowtham005)

Comment 10 Vijay Bellur 2016-03-04 10:13:14 UTC
REVIEW: http://review.gluster.org/13101 (Cli/tier: separating services from cold bricks in xml) posted (#10) for review on master by hari gowtham (hari.gowtham005)

Comment 11 Vijay Bellur 2016-03-11 06:50:34 UTC
REVIEW: http://review.gluster.org/13101 (Cli/tier: separating services from cold bricks in xml) posted (#11) for review on master by hari gowtham (hari.gowtham005)

Comment 12 Vijay Bellur 2016-03-16 08:27:01 UTC
COMMIT: http://review.gluster.org/13101 committed in master by Dan Lambright (dlambrig) 
------
commit 1249030962a177d077e76d346d66ef6061b818ed
Author: hari <hgowtham>
Date:   Mon Dec 28 16:04:50 2015 +0530

    Cli/tier: separating services from cold bricks in xml
    
    fix: The <coldBricks> tag also included the service processes
    (NFS server and similar daemons). This patch closes the
    <coldBricks> tag after the brick count, so the service processes
    are listed after it rather than inside it.
    
    Previous output:
            <coldBricks>
              <node>
                <hostname>192.168.1.102</hostname>
                <path>/data/gluster/b3</path>
                <peerid>8c088528-e1aee3b2b40f</peerid>
                <status>1</status>
                <port>49157</port>
                <ports>
                  <tcp>49157</tcp>
                  <rdma>N/A</rdma>
                </ports>
                <pid>1160</pid>
              </node>
              <node>
                <hostname>NFS Server</hostname>
                <path>localhost</path>
                <peerid>8c088528-e1aee3b2b40f</peerid>
                <status>0</status>
                <port>N/A</port>
                <ports>
                  <tcp>N/A</tcp>
                  <rdma>N/A</rdma>
                </ports>
                <pid>-1</pid>
              </node>
            </coldBricks>
    
    Expected output:
            <coldBricks>
              <node>
                <hostname>192.168.1.102</hostname>
                <path>/data/gluster/b3</path>
                <peerid>8c088528-e1aee3b2b40f</peerid>
                <status>1</status>
                <port>49157</port>
                <ports>
                  <tcp>49157</tcp>
                  <rdma>N/A</rdma>
                </ports>
                <pid>1160</pid>
              </node>
            </coldBricks>
            <node>
              <hostname>NFS Server</hostname>
              <path>localhost</path>
              <peerid>8c088528-e1aee3b2b40f</peerid>
              <status>0</status>
              <port>N/A</port>
              <ports>
                <tcp>N/A</tcp>
                <rdma>N/A</rdma>
              </ports>
              <pid>-1</pid>
            </node>
    
    Change-Id: Ieccd017d7b2edb16786323f1a76402f020bdfb0d
    BUG: 1294497
    Signed-off-by: hari <hgowtham>
    Reviewed-on: http://review.gluster.org/13101
    Smoke: Gluster Build System <jenkins.com>
    Tested-by: hari gowtham <hari.gowtham005>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
    Reviewed-by: Dan Lambright <dlambrig>
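
As a hedged illustration of why the fix matters to consumers of the XML, a script parsing the corrected layout can now tell bricks from services by nesting alone: brick <node> elements sit inside <coldBricks>, while service <node> elements are its siblings. The sketch below uses Python's xml.etree.ElementTree on a minimal fragment modeled on the "Expected output" above; the <volume> wrapper element is a hypothetical root added only so the fragment parses as one document.

```python
# Hypothetical sketch: separate bricks from service processes in the
# fixed `gluster volume status --xml` layout, where service nodes
# (e.g. "NFS Server") appear OUTSIDE <coldBricks>.
import xml.etree.ElementTree as ET

# Minimal fragment based on the "Expected output" in the commit message,
# wrapped in an assumed <volume> root for parsing.
XML = """
<volume>
  <coldBricks>
    <node>
      <hostname>192.168.1.102</hostname>
      <path>/data/gluster/b3</path>
      <status>1</status>
    </node>
  </coldBricks>
  <node>
    <hostname>NFS Server</hostname>
    <path>localhost</path>
    <status>0</status>
  </node>
</volume>
"""

root = ET.fromstring(XML)

# Bricks: <node> elements nested inside <coldBricks>.
bricks = [n.findtext("hostname") for n in root.findall("./coldBricks/node")]

# Services: <node> elements that are direct children of the root,
# i.e. siblings of <coldBricks> rather than children of it.
services = [n.findtext("hostname") for n in root.findall("./node")]

print(bricks)    # ['192.168.1.102']
print(services)  # ['NFS Server']
```

Under the old (buggy) layout both <node> elements lived inside <coldBricks>, so the same XPath queries could not distinguish a brick from the NFS server without inspecting field contents such as <path>.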

Comment 13 Niels de Vos 2016-06-16 13:52:41 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed in glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user