Bug 1268822

Summary: tier/cli: number of bricks remains the same in v info --xml
Product: [Community] GlusterFS
Reporter: hari gowtham <hgowtham>
Component: tiering
Assignee: hari gowtham <hgowtham>
Status: CLOSED CURRENTRELEASE
QA Contact: bugs <bugs>
Severity: unspecified
Priority: unspecified
Docs Contact:
Version: mainline
CC: bugs
Target Milestone: ---
Keywords: Triaged
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: glusterfs-3.8rc2
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-06-16 13:39:34 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1269344, 1271648

Description hari gowtham 2015-10-05 11:43:25 UTC
Description of problem:
In the gluster v info --xml output, the cold tier's brick count is reported as a single replica set (1 x 2 = 2) no matter how many bricks the cold tier actually contains, and the cold brick type is reported as Replicate instead of Distributed-Replicate.

Version-Release number of selected component (if applicable):


How reproducible:
100%

Steps to Reproduce:
1. Create a tiered gluster volume whose cold tier has more than one replica set, e.g. a 3 x 2 distributed-replicate cold tier (see the sketch below).
2. Issue gluster v info --xml.
3. Check the <coldBricks> section of the XML output.
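
A minimal reproduction sketch, assuming a single node at 10.70.42.203 with the cold-tier brick paths from the output below; the volume name (tiervol) and the hot-tier brick paths are illustrative, and the attach-tier command uses the glusterfs 3.7-era syntax:

    # cold tier: 6 bricks with replica 2 -> 3 x 2 distributed-replicate
    gluster volume create tiervol replica 2 \
        10.70.42.203:/data/gluster/tier/b1_1 10.70.42.203:/data/gluster/tier/b1_2 \
        10.70.42.203:/data/gluster/tier/b2_1 10.70.42.203:/data/gluster/tier/b2_2 \
        10.70.42.203:/data/gluster/tier/b3_1 10.70.42.203:/data/gluster/tier/b3_2 force
    gluster volume start tiervol

    # attach a replicated hot tier; the existing bricks become the cold tier
    # (hot-tier brick paths are illustrative)
    gluster volume attach-tier tiervol replica 2 \
        10.70.42.203:/data/gluster/tier/hot_1 10.70.42.203:/data/gluster/tier/hot_2

    # dump the volume info as XML and inspect the <coldBricks> section
    gluster v info tiervol --xml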

Actual results:
<coldBricks>
            <coldBrickType>Replicate</coldBrickType>
            <numberOfBricks>1 x 2 = 2</numberOfBricks>
            <brick uuid="149ac603-8078-41c5-8f71-7373f2a3016f">10.70.42.203:/data/gluster/tier/b1_1<name>10.70.42.203:/data/gluster/tier/b1_1</name><hostUuid>149ac603-8078-41c5-8f71-7373f2a3016f</hostUuid></brick>
            <brick uuid="149ac603-8078-41c5-8f71-7373f2a3016f">10.70.42.203:/data/gluster/tier/b1_2<name>10.70.42.203:/data/gluster/tier/b1_2</name><hostUuid>149ac603-8078-41c5-8f71-7373f2a3016f</hostUuid></brick>
            <brick uuid="149ac603-8078-41c5-8f71-7373f2a3016f">10.70.42.203:/data/gluster/tier/b2_1<name>10.70.42.203:/data/gluster/tier/b2_1</name><hostUuid>149ac603-8078-41c5-8f71-7373f2a3016f</hostUuid></brick>
            <brick uuid="149ac603-8078-41c5-8f71-7373f2a3016f">10.70.42.203:/data/gluster/tier/b2_2<name>10.70.42.203:/data/gluster/tier/b2_2</name><hostUuid>149ac603-8078-41c5-8f71-7373f2a3016f</hostUuid></brick>
            <brick uuid="149ac603-8078-41c5-8f71-7373f2a3016f">10.70.42.203:/data/gluster/tier/b3_1<name>10.70.42.203:/data/gluster/tier/b3_1</name><hostUuid>149ac603-8078-41c5-8f71-7373f2a3016f</hostUuid></brick>
            <brick uuid="149ac603-8078-41c5-8f71-7373f2a3016f">10.70.42.203:/data/gluster/tier/b3_2<name>10.70.42.203:/data/gluster/tier/b3_2</name><hostUuid>149ac603-8078-41c5-8f71-7373f2a3016f</hostUuid></brick>
          </coldBricks>


Expected results:
 <coldBricks>
            <coldBrickType>Distributed-Replicate</coldBrickType>
            <numberOfBricks>3 x 2 = 6</numberOfBricks>
            <brick uuid="149ac603-8078-41c5-8f71-7373f2a3016f">10.70.42.203:/data/gluster/tier/b1_1<name>10.70.42.203:/data/gluster/tier/b1_1</name><hostUuid>149ac603-8078-41c5-8f71-7373f2a3016f</hostUuid></brick>
            <brick uuid="149ac603-8078-41c5-8f71-7373f2a3016f">10.70.42.203:/data/gluster/tier/b1_2<name>10.70.42.203:/data/gluster/tier/b1_2</name><hostUuid>149ac603-8078-41c5-8f71-7373f2a3016f</hostUuid></brick>
            <brick uuid="149ac603-8078-41c5-8f71-7373f2a3016f">10.70.42.203:/data/gluster/tier/b2_1<name>10.70.42.203:/data/gluster/tier/b2_1</name><hostUuid>149ac603-8078-41c5-8f71-7373f2a3016f</hostUuid></brick>
            <brick uuid="149ac603-8078-41c5-8f71-7373f2a3016f">10.70.42.203:/data/gluster/tier/b2_2<name>10.70.42.203:/data/gluster/tier/b2_2</name><hostUuid>149ac603-8078-41c5-8f71-7373f2a3016f</hostUuid></brick>
            <brick uuid="149ac603-8078-41c5-8f71-7373f2a3016f">10.70.42.203:/data/gluster/tier/b3_1<name>10.70.42.203:/data/gluster/tier/b3_1</name><hostUuid>149ac603-8078-41c5-8f71-7373f2a3016f</hostUuid></brick>
            <brick uuid="149ac603-8078-41c5-8f71-7373f2a3016f">10.70.42.203:/data/gluster/tier/b3_2<name>10.70.42.203:/data/gluster/tier/b3_2</name><hostUuid>149ac603-8078-41c5-8f71-7373f2a3016f</hostUuid></brick>
          </coldBricks>
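
For a quick check of just the reported counts (volume name tiervol assumed as in the sketch above), the numberOfBricks elements can be pulled out of the XML; after the fix the cold-tier entry should read 3 x 2 = 6 rather than 1 x 2 = 2:

    gluster v info tiervol --xml | grep -o '<numberOfBricks>[^<]*</numberOfBricks>'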


Additional info:

Comment 1 Niels de Vos 2016-06-16 13:39:34 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user