Bug 1269344 - tier/cli: number of bricks remains the same in v info --xml
Summary: tier/cli: number of bricks remains the same in v info --xml
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: tiering
Version: 3.7.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: hari gowtham
QA Contact: bugs@gluster.org
URL:
Whiteboard:
Depends On: 1268822 1271648
Blocks: glusterfs-3.7.6
 
Reported: 2015-10-07 05:49 UTC by hari gowtham
Modified: 2015-11-17 05:59 UTC
CC: 1 user

Fixed In Version: glusterfs-3.7.6
Doc Type: Bug Fix
Doc Text:
Clone Of: 1268822
Environment:
Last Closed: 2015-11-17 05:59:41 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description hari gowtham 2015-10-07 05:49:08 UTC
+++ This bug was initially created as a clone of Bug #1268822 +++

Description of problem:
The number of bricks is reported as 1 no matter how many bricks are in the cold tier.

Version-Release number of selected component (if applicable):


How reproducible:
100%

Steps to Reproduce:
1. Create a Gluster tiered volume with more than one brick in the cold tier (example commands after this list).
2. Issue gluster v info --xml.
3. Check the <numberOfBricks> value under <coldBricks>.
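
A minimal command sequence for reproducing this, assuming a single peer at 10.70.42.203 and the cold-tier brick paths shown in the output below; the volume name "tiervol" and the hot-tier brick paths are illustrative:

# create a 3 x 2 distributed-replicate volume to serve as the cold tier
gluster volume create tiervol replica 2 \
    10.70.42.203:/data/gluster/tier/b1_1 10.70.42.203:/data/gluster/tier/b1_2 \
    10.70.42.203:/data/gluster/tier/b2_1 10.70.42.203:/data/gluster/tier/b2_2 \
    10.70.42.203:/data/gluster/tier/b3_1 10.70.42.203:/data/gluster/tier/b3_2 force
gluster volume start tiervol

# attach a replicated hot tier (3.7-series syntax; hot brick paths are assumed)
gluster volume attach-tier tiervol replica 2 \
    10.70.42.203:/data/gluster/tier/hot_1 10.70.42.203:/data/gluster/tier/hot_2

# inspect the cold-tier brick count in the XML output
gluster volume info tiervol --xml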

Actual results:
<coldBricks>
            <coldBrickType>Replicate</coldBrickType>
            <numberOfBricks>1 x 2 = 2</numberOfBricks>
            <brick uuid="149ac603-8078-41c5-8f71-7373f2a3016f">10.70.42.203:/data/gluster/tier/b1_1<name>10.70.42.203:/data/gluster/tier/b1_1</name><hostUuid>149ac603-8078-41c5-8f71-7373f2a3016f</hostUuid></brick>
            <brick uuid="149ac603-8078-41c5-8f71-7373f2a3016f">10.70.42.203:/data/gluster/tier/b1_2<name>10.70.42.203:/data/gluster/tier/b1_2</name><hostUuid>149ac603-8078-41c5-8f71-7373f2a3016f</hostUuid></brick>
            <brick uuid="149ac603-8078-41c5-8f71-7373f2a3016f">10.70.42.203:/data/gluster/tier/b2_1<name>10.70.42.203:/data/gluster/tier/b2_1</name><hostUuid>149ac603-8078-41c5-8f71-7373f2a3016f</hostUuid></brick>
            <brick uuid="149ac603-8078-41c5-8f71-7373f2a3016f">10.70.42.203:/data/gluster/tier/b2_2<name>10.70.42.203:/data/gluster/tier/b2_2</name><hostUuid>149ac603-8078-41c5-8f71-7373f2a3016f</hostUuid></brick>
            <brick uuid="149ac603-8078-41c5-8f71-7373f2a3016f">10.70.42.203:/data/gluster/tier/b3_1<name>10.70.42.203:/data/gluster/tier/b3_1</name><hostUuid>149ac603-8078-41c5-8f71-7373f2a3016f</hostUuid></brick>
            <brick uuid="149ac603-8078-41c5-8f71-7373f2a3016f">10.70.42.203:/data/gluster/tier/b3_2<name>10.70.42.203:/data/gluster/tier/b3_2</name><hostUuid>149ac603-8078-41c5-8f71-7373f2a3016f</hostUuid></brick>
          </coldBricks>


Expected results:
 <coldBricks>
            <coldBrickType>Distributed-Replicate</coldBrickType>
            <numberOfBricks>3 x 2 = 6</numberOfBricks>
            <brick uuid="149ac603-8078-41c5-8f71-7373f2a3016f">10.70.42.203:/data/gluster/tier/b1_1<name>10.70.42.203:/data/gluster/tier/b1_1</name><hostUuid>149ac603-8078-41c5-8f71-7373f2a3016f</hostUuid></brick>
            <brick uuid="149ac603-8078-41c5-8f71-7373f2a3016f">10.70.42.203:/data/gluster/tier/b1_2<name>10.70.42.203:/data/gluster/tier/b1_2</name><hostUuid>149ac603-8078-41c5-8f71-7373f2a3016f</hostUuid></brick>
            <brick uuid="149ac603-8078-41c5-8f71-7373f2a3016f">10.70.42.203:/data/gluster/tier/b2_1<name>10.70.42.203:/data/gluster/tier/b2_1</name><hostUuid>149ac603-8078-41c5-8f71-7373f2a3016f</hostUuid></brick>
            <brick uuid="149ac603-8078-41c5-8f71-7373f2a3016f">10.70.42.203:/data/gluster/tier/b2_2<name>10.70.42.203:/data/gluster/tier/b2_2</name><hostUuid>149ac603-8078-41c5-8f71-7373f2a3016f</hostUuid></brick>
            <brick uuid="149ac603-8078-41c5-8f71-7373f2a3016f">10.70.42.203:/data/gluster/tier/b3_1<name>10.70.42.203:/data/gluster/tier/b3_1</name><hostUuid>149ac603-8078-41c5-8f71-7373f2a3016f</hostUuid></brick>
            <brick uuid="149ac603-8078-41c5-8f71-7373f2a3016f">10.70.42.203:/data/gluster/tier/b3_2<name>10.70.42.203:/data/gluster/tier/b3_2</name><hostUuid>149ac603-8078-41c5-8f71-7373f2a3016f</hostUuid></brick>
          </coldBricks>


Additional info:

Comment 1 Raghavendra Talur 2015-11-17 05:59:41 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.6, please open a new bug report.

glusterfs-3.7.6 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/gluster-users/2015-November/024359.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

