Bug 1294615

Summary: While attaching the tier, the bricks are attached in the reverse of the order in which they are specified
Product: Red Hat Gluster Storage
Reporter: spandura
Component: tier
Assignee: hari gowtham <hgowtham>
Status: CLOSED WONTFIX
QA Contact: Nag Pavan Chilakam <nchilaka>
Severity: medium
Docs Contact:
Priority: unspecified
Version: rhgs-3.1
CC: amukherj, pkarampu, ravishankar, rhs-bugs
Target Milestone: ---
Keywords: ZStream
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard: tier-attach-detach
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-02-06 17:43:19 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:

Description spandura 2015-12-29 09:55:23 UTC
Description of problem:
======================
When performing attach-tier, the subvolumes/bricks should be created in the same order in which the bricks are specified in the command.

For example:

gluster volume tier testvol attach replica 3 rhsauto019.lab.eng.blr.redhat.com:/bricks/brick2/testvol_tier0 rhsauto020.lab.eng.blr.redhat.com:/bricks/brick2/testvol_tier1 rhsauto021.lab.eng.blr.redhat.com:/bricks/brick1/testvol_tier2 rhsauto022.lab.eng.blr.redhat.com:/bricks/brick1/testvol_tier3 rhsauto019.lab.eng.blr.redhat.com:/bricks/brick3/testvol_tier4 rhsauto020.lab.eng.blr.redhat.com:/bricks/brick3/testvol_tier5

This would mean the first brick is "rhsauto019.lab.eng.blr.redhat.com:/bricks/brick2/testvol_tier0", the second brick is "rhsauto020.lab.eng.blr.redhat.com:/bricks/brick2/testvol_tier1", and so on.

But in the volume info output, the first brick in the hot tier is "rhsauto020.lab.eng.blr.redhat.com:/bricks/brick3/testvol_tier5", which is misleading.

Volume Name: testvol
Type: Tier
Volume ID: 50b291c4-68ec-4b40-8ca3-cd2a1524f43f
Status: Started
Number of Bricks: 12
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 2 x 3 = 6
Brick1: rhsauto020.lab.eng.blr.redhat.com:/bricks/brick3/testvol_tier5
Brick2: rhsauto019.lab.eng.blr.redhat.com:/bricks/brick3/testvol_tier4
Brick3: rhsauto022.lab.eng.blr.redhat.com:/bricks/brick1/testvol_tier3
Brick4: rhsauto021.lab.eng.blr.redhat.com:/bricks/brick1/testvol_tier2
Brick5: rhsauto020.lab.eng.blr.redhat.com:/bricks/brick2/testvol_tier1
Brick6: rhsauto019.lab.eng.blr.redhat.com:/bricks/brick2/testvol_tier0
Cold Tier:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 2 x 3 = 6
Brick7: rhsauto019.lab.eng.blr.redhat.com:/bricks/brick0/testvol_brick0
Brick8: rhsauto020.lab.eng.blr.redhat.com:/bricks/brick0/testvol_brick1
Brick9: rhsauto021.lab.eng.blr.redhat.com:/bricks/brick0/testvol_brick2
Brick10: rhsauto022.lab.eng.blr.redhat.com:/bricks/brick0/testvol_brick3
Brick11: rhsauto019.lab.eng.blr.redhat.com:/bricks/brick1/testvol_brick4
Brick12: rhsauto020.lab.eng.blr.redhat.com:/bricks/brick1/testvol_brick5
Options Reconfigured:
cluster.watermark-hi: 90
cluster.watermark-low: 75
cluster.tier-mode: cache
features.ctr-enabled: on
performance.readdir-ahead: on
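
The listing above is exactly the CLI order reversed, which is consistent with each hot-tier brick being prepended to the volume's brick list as it is processed. A minimal Python sketch of that hypothesis (illustrative only, not actual glusterd code; the brick names are shortened):

```python
# Illustrative only: models the hypothesis that each hot-tier brick is
# prepended to the brick list, which would reverse the CLI order.
cli_order = [f"testvol_tier{i}" for i in range(6)]  # order given on the command line

brick_list = []
for brick in cli_order:
    brick_list.insert(0, brick)  # prepend each newly parsed hot brick

print(brick_list[0])  # testvol_tier5 -- matches Brick1 in the volume info above
```

Whatever the actual mechanism, attach-tier should preserve the order given on the command line, as add-brick does for regular volumes.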

Version-Release number of selected component (if applicable):
==============================================================
glusterfs-server-3.7.5-13.el7rhgs.x86_64

Comment 3 Ravishankar N 2017-05-15 10:25:36 UTC
I think this bug is important to fix if we need to support arbiter with tiering. I am seeing many upstream queries on gluster-users asking for this combination as well.

Comment 5 Shyamsundar 2018-02-06 17:43:19 UTC
Thank you for your bug report.

We are no longer working on any improvements for Tier. This bug will be set to CLOSED WONTFIX to reflect this. Please reopen if the RFE is deemed critical.