Bug 1229251 - Data Tiering; Need to change volume info details like type of volume and number of bricks when tier is attached to an EC (disperse) volume
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: tier
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.1.0
Assignee: Bug Updates Notification Mailing List
QA Contact: Nag Pavan Chilakam
URL:
Whiteboard:
Depends On: 1212019
Blocks: qe_tracker_everglades 1202842
 
Reported: 2015-06-08 10:31 UTC by Nag Pavan Chilakam
Modified: 2016-09-17 15:39 UTC
CC: 9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of: 1212019
Environment:
Last Closed: 2015-07-29 04:58:50 UTC
Embargoed:




Links
Red Hat Product Errata RHSA-2015:1495 (normal, SHIPPED_LIVE): Important: Red Hat Gluster Storage 3.1 update. Last Updated: 2015-07-29 08:26:26 UTC

Description Nag Pavan Chilakam 2015-06-08 10:31:44 UTC
+++ This bug was initially created as a clone of Bug #1212019 +++

Description of problem:
======================
When we attach a tier to an EC (disperse) volume, the brick count reported in vol info gets completely skewed.
For example, when the EC volume was created, vol info showed:
Number of Bricks: 1 x (8 + 4) = 12
But when we attach a replica pair as the tier layer, the equation is rewritten as though every brick were part of a replica pair:

Number of Bricks: 12 x 2 = 24

Also, the volume type should read Tiered-Distributed-Disperse instead of just Tier.
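(To spell out the expected arithmetic: a disperse volume built from 12 bricks with redundancy 4 keeps data on 12 - 4 = 8 bricks per subvolume, so vol info reports 1 x (8 + 4) = 12. Attaching a 2-brick hot tier adds those bricks on top of the cold tier, so the correct total is 12 + 2 = 14 bricks, not 12 x 2 = 24.)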

Version-Release number of selected component (if applicable):
============================================================
[root@vertigo ~]# gluster --version
glusterfs 3.7dev built on Apr 13 2015 07:14:27
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@vertigo ~]# rpm -qa|grep gluster
glusterfs-server-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-rdma-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-api-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-regression-tests-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-devel-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-resource-agents-3.7dev-0.994.gitf522001.el6.noarch
glusterfs-libs-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-fuse-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-geo-replication-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-cli-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-api-devel-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-extra-xlators-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-debuginfo-3.7dev-0.994.gitf522001.el6.x86_64


How reproducible:
================
easily


Steps to Reproduce:
===================
1. Create an EC volume:
[root@vertigo ~]# gluster v create rhatvol redundancy 4 vertigo:/rhs/brick1/rhatvol-1 ninja:/rhs/brick1/rhatvol-2 vertigo:/rhs/brick2/rhatvol-3 ninja:/rhs/brick2/rhatvol-4 vertigo:/rhs/brick3/rhatvol-5 ninja:/rhs/brick3/rhatvol-6 vertigo:/rhs/brick4/rhatvol-7 ninja:/rhs/brick4/rhatvol-8 vertigo:/rhs/brick1/rhatvol-9 ninja:/rhs/brick1/rhatvol-10 vertigo:/rhs/brick2/rhatvol-11 ninja:/rhs/brick2/rhatvol-12 force
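(Note: no disperse count is given in this command; the CLI appears to derive it from the brick count, so 12 bricks with redundancy 4 produce the 8 + 4 layout shown in the next step.)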

2. Issue vol info; it shows:
Volume Name: rhatvol
Type: Disperse
Volume ID: e4594e70-9d75-47ce-b883-60d37cee989b
Status: Created
Number of Bricks: 1 x (8 + 4) = 12
Transport-type: tcp
Bricks:
Brick1: vertigo:/rhs/brick1/rhatvol-1
Brick2: ninja:/rhs/brick1/rhatvol-2
Brick3: vertigo:/rhs/brick2/rhatvol-3
Brick4: ninja:/rhs/brick2/rhatvol-4
Brick5: vertigo:/rhs/brick3/rhatvol-5
Brick6: ninja:/rhs/brick3/rhatvol-6
Brick7: vertigo:/rhs/brick4/rhatvol-7
Brick8: ninja:/rhs/brick4/rhatvol-8
Brick9: vertigo:/rhs/brick1/rhatvol-9
Brick10: ninja:/rhs/brick1/rhatvol-10
Brick11: vertigo:/rhs/brick2/rhatvol-11
Brick12: ninja:/rhs/brick2/rhatvol-12

 
3. Now attach a tier to this volume and check vol info again:
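(The attach-tier command itself was not captured in the original report; judging from the brick list below, it would have been something like: gluster v attach-tier rhatvol replica 2 ninja:/rhs/brick1/testvol-tier vertigo:/rhs/brick1/testvol-tier. This is a reconstruction, not the exact command used.)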
Volume Name: rhatvol
Type: Tier
Volume ID: 7fe23f41-877a-4f37-a86a-5ea937bdf9d7
Status: Started
Number of Bricks: 12 x 2 = 24
Transport-type: tcp
Bricks:
Brick1: ninja:/rhs/brick1/testvol-tier
Brick2: vertigo:/rhs/brick1/testvol-tier
Brick3: vertigo:/rhs/brick1/testvol-1
Brick4: ninja:/rhs/brick1/testvol-2
Brick5: vertigo:/rhs/brick2/testvol-3
Brick6: ninja:/rhs/brick2/testvol-4
Brick7: vertigo:/rhs/brick3/testvol-5
Brick8: ninja:/rhs/brick3/testvol-6
Brick9: vertigo:/rhs/brick4/testvol-7
Brick10: ninja:/rhs/brick4/testvol-8
Brick11: vertigo:/rhs/brick1/testvol-9
Brick12: ninja:/rhs/brick1/testvol-10
Brick13: vertigo:/rhs/brick2/testvol-11
Brick14: ninja:/rhs/brick2/testvol-12
Brick15: interstellar:/rhs/brick1/testvol-11
Brick16: transformers:/rhs/brick1/testvol-12
Brick17: interstellar:/rhs/brick2/testvol-13
Brick18: transformers:/rhs/brick2/testvol-14
Brick19: interstellar:/rhs/brick1/testvol-15
Brick20: transformers:/rhs/brick1/testvol-16
Brick21: interstellar:/rhs/brick2/testvol-17
Brick22: transformers:/rhs/brick2/testvol-18
Brick23: interstellar:/rhs/brick1/testvol-19
Brick24: transformers:/rhs/brick1/testvol-20


Actual results:
==============
The Number of Bricks equation has changed to an incorrect form, and the volume type shows just Tier rather than something like Tier-Disperse.

Expected results:
================
Show the volume type as Tier-Disperse or Tier-Distributed-Disperse.
Also, show the number of bricks using the EC equation, separating out the tier layer (see the illustrative sketch below).
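(For illustration, the layout the fix eventually produced, as verified in comment 3 below, keeps Type: Tier but splits vol info into hot and cold tier sections, along these lines:

Hot Tier :
Hot Tier Type : Distribute
Number of Bricks: 2
Cold Tier:
Cold Tier Type : Distributed-Disperse
Number of Bricks: 2 x (4 + 2) = 12
)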

--- Additional comment from Mohammed Rafi KC on 2015-04-23 07:19:54 EDT ---

upstream patch : http://review.gluster.org/#/c/10339/

--- Additional comment from Niels de Vos on 2015-05-15 09:07:27 EDT ---

This change should not be in "ON_QA"; the patch posted for this bug is only available in the master branch and not in a release yet. Moving back to MODIFIED until there is a beta release for the next GlusterFS version.

Comment 3 Triveni Rao 2015-06-18 09:37:28 UTC
This bug has been verified with both types of EC volumes: pure disperse and distributed-disperse.


This is with a plain disperse volume:

[root@rhsqa14-vm3 ~]# gluster v create ec1 disperse-data 2 redundancy 1 10.70.47.159:/rhs/brick1/ec1 10.70.46.2:/rhs/brick1/ec1 10.70.47.159:/rhs/brick2/ec1 force
volume create: ec1: success: please start the volume to access data
[root@rhsqa14-vm3 ~]# gluster v start ec1
volume start: ec1: success
[root@rhsqa14-vm3 ~]# gluster v info
 
Volume Name: ec1
Type: Disperse
Volume ID: 088e6bb8-83e4-4304-85a8-79a4e292afd8
Status: Started
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.47.159:/rhs/brick1/ec1
Brick2: 10.70.46.2:/rhs/brick1/ec1
Brick3: 10.70.47.159:/rhs/brick2/ec1
Options Reconfigured:
performance.readdir-ahead: on
 
Volume Name: ecvol
Type: Disperse
Volume ID: 140cf106-24e8-4c0b-8b87-75c4d361fdca
Status: Started
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.47.159:/rhs/brick1/e0
Brick2: 10.70.46.2:/rhs/brick1/e0
Brick3: 10.70.47.159:/rhs/brick2/e0
Brick4: 10.70.46.2:/rhs/brick2/e0
Brick5: 10.70.47.159:/rhs/brick3/e0
Brick6: 10.70.46.2:/rhs/brick3/e0
Options Reconfigured:
performance.readdir-ahead: on
 
Volume Name: test
Type: Tier
Volume ID: 0b2070bd-eca0-4f1a-bdab-207a3f0a95f7
Status: Started
Number of Bricks: 6
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distribute
Number of Bricks: 2
Brick1: 10.70.46.2:/rhs/brick3/t0
Brick2: 10.70.47.159:/rhs/brick3/t0
Cold Tier:
Cold Tier Type : Distribute
Number of Bricks: 4
Brick3: 10.70.47.159:/rhs/brick1/t0
Brick4: 10.70.46.2:/rhs/brick1/t0
Brick5: 10.70.47.159:/rhs/brick2/t0
Brick6: 10.70.46.2:/rhs/brick2/t0
Options Reconfigured:
performance.readdir-ahead: on
[root@rhsqa14-vm3 ~]# gluster v info ec1
 
Volume Name: ec1
Type: Disperse
Volume ID: 088e6bb8-83e4-4304-85a8-79a4e292afd8
Status: Started
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.47.159:/rhs/brick1/ec1
Brick2: 10.70.46.2:/rhs/brick1/ec1
Brick3: 10.70.47.159:/rhs/brick2/ec1
Options Reconfigured:
performance.readdir-ahead: on
[root@rhsqa14-vm3 ~]# 
[root@rhsqa14-vm3 ~]# 
[root@rhsqa14-vm3 ~]# 
[root@rhsqa14-vm3 ~]# gluster v ec1 status
unrecognized word: ec1 (position 1)
[root@rhsqa14-vm3 ~]# gluster v status ec1
Status of volume: ec1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.47.159:/rhs/brick1/ec1          49159     0          Y       27228
Brick 10.70.46.2:/rhs/brick1/ec1            49159     0          Y       16170
Brick 10.70.47.159:/rhs/brick2/ec1          49160     0          Y       27246
NFS Server on localhost                     2049      0          Y       27265
Self-heal Daemon on localhost               N/A       N/A        Y       27273
NFS Server on 10.70.46.2                    2049      0          Y       16189
Self-heal Daemon on 10.70.46.2              N/A       N/A        Y       16197
 
Task Status of Volume ec1
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@rhsqa14-vm3 ~]# 
[root@rhsqa14-vm3 ~]# 
[root@rhsqa14-vm3 ~]# gluster v attach-tier ec1 10.70.47.159:/rhs/brick4/ec1 10.70.46.2:/rhs/brick4/ec1
Attach tier is recommended only for testing purposes in this release. Do you want to continue? (y/n) y

volume attach-tier: success
volume rebalance: ec1: success: Rebalance on ec1 has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: b25fb766-f9bf-4df2-a1ff-3d43c2f4faa5
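(Note: since no replica count was passed to attach-tier here, the hot tier comes up as plain distribute, which is what the Hot Tier Type line in the vol info below reports.)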

[root@rhsqa14-vm3 ~]# 
[root@rhsqa14-vm3 ~]# gluster v info ec1
 
Volume Name: ec1
Type: Tier
Volume ID: 088e6bb8-83e4-4304-85a8-79a4e292afd8
Status: Started
Number of Bricks: 5
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distribute
Number of Bricks: 2
Brick1: 10.70.46.2:/rhs/brick4/ec1
Brick2: 10.70.47.159:/rhs/brick4/ec1
Cold Tier:
Cold Tier Type : Disperse
Number of Bricks: 1 x (2 + 1) = 3
Brick3: 10.70.47.159:/rhs/brick1/ec1
Brick4: 10.70.46.2:/rhs/brick1/ec1
Brick5: 10.70.47.159:/rhs/brick2/ec1
Options Reconfigured:
performance.readdir-ahead: on
[root@rhsqa14-vm3 ~]# 
[root@rhsqa14-vm3 ~]# 
[root@rhsqa14-vm3 ~]# 


=========================================================

This is with a distributed-disperse volume:

[root@rhsqa14-vm3 ~]# gluster v create ec2 disperse-data 4 redundancy 2 10.70.47.159:/rhs/brick1/ec2 10.70.46.2:/rhs/brick1/ec2 10.70.47.159:/rhs/brick2/ec2 10.70.46.2:/rhs/brick2/ec2 10.70.47.159:/rhs/brick3/ec2 10.70.46.2:/rhs/brick3/ec2 force
volume create: ec2: success: please start the volume to access data
[root@rhsqa14-vm3 ~]# 
[root@rhsqa14-vm3 ~]# 
[root@rhsqa14-vm3 ~]# gluster v start ec2
volume start: ec2: success
[root@rhsqa14-vm3 ~]# gluster v info ec2
 
Volume Name: ec2
Type: Disperse
Volume ID: 6617f138-3f8b-4c21-99e2-bbdd25f98e70
Status: Started
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.47.159:/rhs/brick1/ec2
Brick2: 10.70.46.2:/rhs/brick1/ec2
Brick3: 10.70.47.159:/rhs/brick2/ec2
Brick4: 10.70.46.2:/rhs/brick2/ec2
Brick5: 10.70.47.159:/rhs/brick3/ec2
Brick6: 10.70.46.2:/rhs/brick3/ec2
Options Reconfigured:
performance.readdir-ahead: on
[root@rhsqa14-vm3 ~]# 



[root@rhsqa14-vm3 ~]# gluster v add-brick ec2 10.70.47.159:/rhs/brick4/ec2 10.70.46.2:/rhs/brick4/ec2 10.70.47.159:/rhs/brick5/ec2 10.70.46.2:/rhs/brick5/ec2 10.70.47.159:/rhs/brick6/ec2 10.70.46.2:/rhs/brick6/ec2
volume add-brick: success
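(Adding six more bricks to the 1 x (4 + 2) volume creates a second disperse subvolume, so the equation in the next vol info becomes 2 x (4 + 2) = 12.)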
[root@rhsqa14-vm3 ~]# 
[root@rhsqa14-vm3 ~]# 
[root@rhsqa14-vm3 ~]# gluster v info ec2
 
Volume Name: ec2
Type: Distributed-Disperse
Volume ID: 6617f138-3f8b-4c21-99e2-bbdd25f98e70
Status: Started
Number of Bricks: 2 x (4 + 2) = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.47.159:/rhs/brick1/ec2
Brick2: 10.70.46.2:/rhs/brick1/ec2
Brick3: 10.70.47.159:/rhs/brick2/ec2
Brick4: 10.70.46.2:/rhs/brick2/ec2
Brick5: 10.70.47.159:/rhs/brick3/ec2
Brick6: 10.70.46.2:/rhs/brick3/ec2
Brick7: 10.70.47.159:/rhs/brick4/ec2
Brick8: 10.70.46.2:/rhs/brick4/ec2
Brick9: 10.70.47.159:/rhs/brick5/ec2
Brick10: 10.70.46.2:/rhs/brick5/ec2
Brick11: 10.70.47.159:/rhs/brick6/ec2
Brick12: 10.70.46.2:/rhs/brick6/ec2
Options Reconfigured:
performance.readdir-ahead: on
[root@rhsqa14-vm3 ~]# 


[root@rhsqa14-vm3 ~]# gluster v attach-tier ec2 10.70.47.159:/rhs/brick6/ec2_0 10.70.46.2:/rhs/brick6/ec2_0
Attach tier is recommended only for testing purposes in this release. Do you want to continue? (y/n) y
volume attach-tier: success
volume rebalance: ec2: success: Rebalance on ec2 has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: bb666fe1-475c-45a8-8256-b2a6ff9bffc6

[root@rhsqa14-vm3 ~]# gluster v info ec2
 
Volume Name: ec2
Type: Tier
Volume ID: 6617f138-3f8b-4c21-99e2-bbdd25f98e70
Status: Started
Number of Bricks: 14
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distribute
Number of Bricks: 2
Brick1: 10.70.46.2:/rhs/brick6/ec2_0
Brick2: 10.70.47.159:/rhs/brick6/ec2_0
Cold Tier:
Cold Tier Type : Distributed-Disperse
Number of Bricks: 2 x (4 + 2) = 12
Brick3: 10.70.47.159:/rhs/brick1/ec2
Brick4: 10.70.46.2:/rhs/brick1/ec2
Brick5: 10.70.47.159:/rhs/brick2/ec2
Brick6: 10.70.46.2:/rhs/brick2/ec2
Brick7: 10.70.47.159:/rhs/brick3/ec2
Brick8: 10.70.46.2:/rhs/brick3/ec2
Brick9: 10.70.47.159:/rhs/brick4/ec2
Brick10: 10.70.46.2:/rhs/brick4/ec2
Brick11: 10.70.47.159:/rhs/brick5/ec2
Brick12: 10.70.46.2:/rhs/brick5/ec2
Brick13: 10.70.47.159:/rhs/brick6/ec2
Brick14: 10.70.46.2:/rhs/brick6/ec2
Options Reconfigured:
performance.readdir-ahead: on
[root@rhsqa14-vm3 ~]# 


[root@rhsqa14-vm3 ~]# rpm -qa | grep gluster
glusterfs-3.7.1-3.el6rhs.x86_64
glusterfs-cli-3.7.1-3.el6rhs.x86_64
glusterfs-geo-replication-3.7.1-3.el6rhs.x86_64
glusterfs-libs-3.7.1-3.el6rhs.x86_64
glusterfs-client-xlators-3.7.1-3.el6rhs.x86_64
glusterfs-fuse-3.7.1-3.el6rhs.x86_64
glusterfs-server-3.7.1-3.el6rhs.x86_64
glusterfs-rdma-3.7.1-3.el6rhs.x86_64
glusterfs-api-3.7.1-3.el6rhs.x86_64
glusterfs-debuginfo-3.7.1-3.el6rhs.x86_64
[root@rhsqa14-vm3 ~]#

Comment 4 errata-xmlrpc 2015-07-29 04:58:50 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html

