Bug 1224239 - [Data Tiering] : Attaching a replica 2 hot tier to a replica 3 volume changes the volume topology to nx2 - causing inconsistent data between bricks in the replica set
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: tier
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.1.0
Assignee: Mohammed Rafi KC
QA Contact: Nag Pavan Chilakam
URL:
Whiteboard:
Depends On: 1215152
Blocks: qe_tracker_everglades 1202842
 
Reported: 2015-05-22 11:16 UTC by senaik
Modified: 2016-09-17 15:36 UTC (History)
11 users

Fixed In Version: glusterfs-3.7.1-1
Doc Type: Bug Fix
Doc Text:
Clone Of: 1215152
Environment:
Last Closed: 2015-07-29 04:51:47 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2015:1495 0 normal SHIPPED_LIVE Important: Red Hat Gluster Storage 3.1 update 2015-07-29 08:26:26 UTC

Comment 3 Nag Pavan Chilakam 2015-06-15 12:30:02 UTC
1) Created tiered volumes with the following configurations:

Hot tier            Cold tier
=========          ===========
3way replica       ECvolume(1x[4+2])
3 way replica      2 way replica
2 way replica      3 way replica

All of the above scenarios worked correctly, and the volume topology was reported as expected:
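For reference, the scenarios above were set up with the standard volume-create and attach-tier commands available in glusterfs 3.7. A minimal sketch of the third case (replica 3 cold tier, replica 2 hot tier) follows; the host addresses and brick paths mirror the two-node test setup shown in the `gluster v info` output below and are illustrative, not prescriptive:

```shell
#!/bin/sh
# Sketch of the verification steps for the "2 way replica hot / 3 way
# replica cold" scenario. Host IPs and brick paths are placeholders
# taken from the test environment below.
HOST1=10.70.46.2
HOST2=10.70.47.159

# Cold tier: a 2 x 3 distributed-replicate (replica 3) volume.
gluster volume create replica3 replica 3 \
    ${HOST2}:/rhs/brick1/replica3 ${HOST1}:/rhs/brick1/replica3 \
    ${HOST2}:/rhs/brick2/replica3 ${HOST1}:/rhs/brick2/replica3 \
    ${HOST2}:/rhs/brick3/replica3 ${HOST1}:/rhs/brick3/replica3
gluster volume start replica3

# Hot tier: attach six bricks as a 3 x 2 (replica 2) tier. The bug was
# that the attached tier's replica count silently redefined the whole
# volume's topology; the fix keeps the two tiers' topologies separate.
gluster volume attach-tier replica3 replica 2 \
    ${HOST1}:/rhs/brick6/replica3 ${HOST2}:/rhs/brick6/replica3 \
    ${HOST1}:/rhs/brick5/replica3 ${HOST2}:/rhs/brick5/replica3 \
    ${HOST1}:/rhs/brick4/replica3 ${HOST2}:/rhs/brick4/replica3

# Verify: hot tier should report 3 x 2 = 6, cold tier 2 x 3 = 6.
gluster volume info replica3
```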

[root@rhsqa14-vm4 ~]# gluster v info
 
Volume Name: distrep
Type: Tier
Volume ID: 8f74266f-de75-4cfe-a63d-e2b08a7e3057
Status: Started
Number of Bricks: 10
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 2 x 3 = 6
Brick1: 10.70.46.2:/rhs/brick6/distrep
Brick2: 10.70.47.159:/rhs/brick6/distrep
Brick3: 10.70.46.2:/rhs/brick5/distrep
Brick4: 10.70.47.159:/rhs/brick5/distrep
Brick5: 10.70.46.2:/rhs/brick4/distrep
Brick6: 10.70.47.159:/rhs/brick4/distrep
Cold Tier:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick7: 10.70.47.159:/rhs/brick1/distrep
Brick8: 10.70.46.2:/rhs/brick1/distrep
Brick9: 10.70.47.159:/rhs/brick2/distrep
Brick10: 10.70.46.2:/rhs/brick2/distrep
Options Reconfigured:
performance.readdir-ahead: on
 
Volume Name: ecvol
Type: Tier
Volume ID: f25eecf7-a83e-478c-b73d-1d954d7a78fe
Status: Started
Number of Bricks: 12
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 2 x 3 = 6
Brick1: 10.70.46.2:/rhs/brick6/ecvol
Brick2: 10.70.47.159:/rhs/brick6/ecvol
Brick3: 10.70.46.2:/rhs/brick5/ecvol
Brick4: 10.70.47.159:/rhs/brick5/ecvol
Brick5: 10.70.46.2:/rhs/brick4/ecvol
Brick6: 10.70.47.159:/rhs/brick4/ecvol
Cold Tier:
Cold Tier Type : Disperse
Number of Bricks: 1 x (4 + 2) = 6
Brick7: 10.70.47.159:/rhs/brick1/ecvol
Brick8: 10.70.46.2:/rhs/brick1/ecvol
Brick9: 10.70.47.159:/rhs/brick2/ecvol
Brick10: 10.70.46.2:/rhs/brick2/ecvol
Brick11: 10.70.46.2:/rhs/brick3/ecvol
Brick12: 10.70.47.159:/rhs/brick3/ecvol
Options Reconfigured:
cluster.tier-promote-frequency: 6
cluster.tier-demote-frequency: 5
features.record-counters: on
features.ctr-enabled: on
performance.readdir-ahead: on
 
Volume Name: replica3
Type: Tier
Volume ID: 5aa307c6-6a1b-4962-abae-12cf1f502d40
Status: Started
Number of Bricks: 12
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 3 x 2 = 6
Brick1: 10.70.46.2:/rhs/brick6/replica3
Brick2: 10.70.47.159:/rhs/brick6/replica3
Brick3: 10.70.46.2:/rhs/brick5/replica3
Brick4: 10.70.47.159:/rhs/brick5/replica3
Brick5: 10.70.46.2:/rhs/brick4/replica3
Brick6: 10.70.47.159:/rhs/brick4/replica3
Cold Tier:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 2 x 3 = 6
Brick7: 10.70.47.159:/rhs/brick1/replica3
Brick8: 10.70.46.2:/rhs/brick1/replica3
Brick9: 10.70.47.159:/rhs/brick2/replica3
Brick10: 10.70.46.2:/rhs/brick2/replica3
Brick11: 10.70.47.159:/rhs/brick3/replica3
Brick12: 10.70.46.2:/rhs/brick3/replica3
Options Reconfigured:
performance.readdir-ahead: on
[root@rhsqa14-vm4 ~]# gluster --version
glusterfs 3.7.1 built on Jun 14 2015 21:55:41
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@rhsqa14-vm4 ~]# rpm -qa|grep gluster
glusterfs-libs-3.7.1-3.el6rhs.x86_64
glusterfs-fuse-3.7.1-3.el6rhs.x86_64
glusterfs-3.7.1-3.el6rhs.x86_64
glusterfs-api-3.7.1-3.el6rhs.x86_64
glusterfs-cli-3.7.1-3.el6rhs.x86_64
glusterfs-geo-replication-3.7.1-3.el6rhs.x86_64
glusterfs-debuginfo-3.7.1-3.el6rhs.x86_64
glusterfs-client-xlators-3.7.1-3.el6rhs.x86_64
glusterfs-server-3.7.1-3.el6rhs.x86_64
glusterfs-rdma-3.7.1-3.el6rhs.x86_64
[root@rhsqa14-vm4 ~]# 




Hence, moving this bug to VERIFIED.

Comment 4 errata-xmlrpc 2015-07-29 04:51:47 UTC
Since the problem described in this bug report should be resolved by a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html

