Bug 1215152 - [Data Tiering] : Attaching a replica 2 hot tier to a replica 3 volume changes the volume topology to nx2 - causing inconsistent data between bricks in the replica set
Summary: [Data Tiering] : Attaching a replica 2 hot tier to a replica 3 volume changes the volume topology to nx2 - causing inconsistent data between bricks in the replica set
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: tiering
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Assignee: Mohammed Rafi KC
QA Contact: bugs@gluster.org
URL:
Whiteboard:
Depends On:
Blocks: qe_tracker_everglades 1224239 1260923
 
Reported: 2015-04-24 12:31 UTC by senaik
Modified: 2018-10-08 09:53 UTC
CC: 3 users

Fixed In Version: glusterfs-4.1.4
Clone Of:
: 1224239 (view as bug list)
Environment:
Last Closed: 2018-10-08 09:53:43 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description senaik 2015-04-24 12:31:10 UTC
Description of problem:
========================
Attaching a replica 2 hot tier to a replica 3 distributed-replicate volume changes the volume type to an n x 2 distributed-replicate volume, leaving the bricks of the original replica sets with inconsistent data.

Version-Release number of selected component (if applicable):
============================================================
 gluster --version
glusterfs 3.7dev built on Apr 17 2015 09:57:40

How reproducible:
=================
always

Steps to Reproduce:
===================
1. Create a 6 x 3 distributed-replicate volume (a sketch of a matching create command follows the volume info below)

[root@rhs-arch-srv3 ~]# gluster v i vol0
 
Volume Name: vol0
Type: Distributed-Replicate
Volume ID: bec40726-8b20-4869-8719-e74906f749e1
Status: Started
Number of Bricks: 6 x 3 = 18
Transport-type: tcp
Bricks:
Brick1: inception.lab.eng.blr.redhat.com:/rhs/brick1/b1
Brick2: rhs-arch-srv2.lab.eng.blr.redhat.com:/rhs/brick1/b1
Brick3: rhs-arch-srv3.lab.eng.blr.redhat.com:/rhs/brick1/b1
Brick4: rhs-arch-srv4.lab.eng.blr.redhat.com:/rhs/brick1/b1
Brick5: inception.lab.eng.blr.redhat.com:/rhs/brick2/b2
Brick6: rhs-arch-srv2.lab.eng.blr.redhat.com:/rhs/brick2/b2
Brick7: rhs-arch-srv3.lab.eng.blr.redhat.com:/rhs/brick2/b2
Brick8: rhs-arch-srv4.lab.eng.blr.redhat.com:/rhs/brick2/b2
Brick9: inception.lab.eng.blr.redhat.com:/rhs/brick3/b3
Brick10: rhs-arch-srv2.lab.eng.blr.redhat.com:/rhs/brick3/b3
Brick11: rhs-arch-srv3.lab.eng.blr.redhat.com:/rhs/brick3/b3
Brick12: rhs-arch-srv4.lab.eng.blr.redhat.com:/rhs/brick3/b3
Brick13: inception.lab.eng.blr.redhat.com:/rhs/brick4/b4
Brick14: rhs-arch-srv2.lab.eng.blr.redhat.com:/rhs/brick4/b4
Brick15: rhs-arch-srv3.lab.eng.blr.redhat.com:/rhs/brick4/b4
Brick16: rhs-arch-srv4.lab.eng.blr.redhat.com:/rhs/brick4/b4
Brick17: inception.lab.eng.blr.redhat.com:/rhs/brick5/b5
Brick18: rhs-arch-srv2.lab.eng.blr.redhat.com:/rhs/brick5/b5
Options Reconfigured:
server.allow-insecure: on
features.quota: on
features.uss: enable
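
A command along the following lines would produce the 6 x 3 layout shown above. This is a sketch only - the exact create command and subsequent option settings are not captured in this report; the brick order follows the listing above:

gluster volume create vol0 replica 3 \
  inception.lab.eng.blr.redhat.com:/rhs/brick1/b1 \
  rhs-arch-srv2.lab.eng.blr.redhat.com:/rhs/brick1/b1 \
  rhs-arch-srv3.lab.eng.blr.redhat.com:/rhs/brick1/b1 \
  rhs-arch-srv4.lab.eng.blr.redhat.com:/rhs/brick1/b1 \
  inception.lab.eng.blr.redhat.com:/rhs/brick2/b2 \
  rhs-arch-srv2.lab.eng.blr.redhat.com:/rhs/brick2/b2 \
  rhs-arch-srv3.lab.eng.blr.redhat.com:/rhs/brick2/b2 \
  rhs-arch-srv4.lab.eng.blr.redhat.com:/rhs/brick2/b2 \
  inception.lab.eng.blr.redhat.com:/rhs/brick3/b3 \
  rhs-arch-srv2.lab.eng.blr.redhat.com:/rhs/brick3/b3 \
  rhs-arch-srv3.lab.eng.blr.redhat.com:/rhs/brick3/b3 \
  rhs-arch-srv4.lab.eng.blr.redhat.com:/rhs/brick3/b3 \
  inception.lab.eng.blr.redhat.com:/rhs/brick4/b4 \
  rhs-arch-srv2.lab.eng.blr.redhat.com:/rhs/brick4/b4 \
  rhs-arch-srv3.lab.eng.blr.redhat.com:/rhs/brick4/b4 \
  rhs-arch-srv4.lab.eng.blr.redhat.com:/rhs/brick4/b4 \
  inception.lab.eng.blr.redhat.com:/rhs/brick5/b5 \
  rhs-arch-srv2.lab.eng.blr.redhat.com:/rhs/brick5/b5
gluster volume start vol0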

2. Create some IO 
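
The exact IO is not recorded here; the 10 MiB files named fuse* and nfs* seen in step 5 suggest writes over a FUSE and an NFS mount, roughly along these lines (mount points and file counts are assumptions):

mount -t glusterfs inception.lab.eng.blr.redhat.com:/vol0 /mnt/fuse
mount -t nfs -o vers=3 inception.lab.eng.blr.redhat.com:/vol0 /mnt/nfs
for i in $(seq 1 10); do
    dd if=/dev/urandom of=/mnt/fuse/fuse$i bs=1M count=10
    dd if=/dev/urandom of=/mnt/nfs/nfs$i bs=1M count=10
done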

3. Attach a replica 2 hot tier  
gluster v attach-tier vol0 replica 2 inception.lab.eng.blr.redhat.com:/rhs/brick7/b7 rhs-arch-srv2.lab.eng.blr.redhat.com:/rhs/brick7/b7

4. Check gluster v info - the volume type has changed from a 6 x 3 to a 10 x 2 dist-rep volume

[root@rhs-arch-srv3 ~]# gluster v i vol0
 
Volume Name: vol0
Type: Tier
Volume ID: bec40726-8b20-4869-8719-e74906f749e1
Status: Started
Number of Bricks: 10 x 2 = 20
Transport-type: tcp
Bricks:
Brick1: rhs-arch-srv2.lab.eng.blr.redhat.com:/rhs/brick7/b7
Brick2: inception.lab.eng.blr.redhat.com:/rhs/brick7/b7
Brick3: inception.lab.eng.blr.redhat.com:/rhs/brick1/b1
Brick4: rhs-arch-srv2.lab.eng.blr.redhat.com:/rhs/brick1/b1
Brick5: rhs-arch-srv3.lab.eng.blr.redhat.com:/rhs/brick1/b1
Brick6: rhs-arch-srv4.lab.eng.blr.redhat.com:/rhs/brick1/b1
Brick7: inception.lab.eng.blr.redhat.com:/rhs/brick2/b2
Brick8: rhs-arch-srv2.lab.eng.blr.redhat.com:/rhs/brick2/b2
Brick9: rhs-arch-srv3.lab.eng.blr.redhat.com:/rhs/brick2/b2
Brick10: rhs-arch-srv4.lab.eng.blr.redhat.com:/rhs/brick2/b2
Brick11: inception.lab.eng.blr.redhat.com:/rhs/brick3/b3
Brick12: rhs-arch-srv2.lab.eng.blr.redhat.com:/rhs/brick3/b3
Brick13: rhs-arch-srv3.lab.eng.blr.redhat.com:/rhs/brick3/b3
Brick14: rhs-arch-srv4.lab.eng.blr.redhat.com:/rhs/brick3/b3
Brick15: inception.lab.eng.blr.redhat.com:/rhs/brick4/b4
Brick16: rhs-arch-srv2.lab.eng.blr.redhat.com:/rhs/brick4/b4
Brick17: rhs-arch-srv3.lab.eng.blr.redhat.com:/rhs/brick4/b4
Brick18: rhs-arch-srv4.lab.eng.blr.redhat.com:/rhs/brick4/b4
Brick19: inception.lab.eng.blr.redhat.com:/rhs/brick5/b5
Brick20: rhs-arch-srv2.lab.eng.blr.redhat.com:/rhs/brick5/b5
Options Reconfigured:
server.allow-insecure: on
features.quota: on
features.uss: enable


5. Compare the files on the backend between the earlier 3-brick replica set and the changed 2-brick replica set
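
The listings below were taken directly from the brick directories on each server; a rough sketch of how such a listing can be gathered (host names in the output below are shortened, and ssh access between the nodes is assumed):

for h in rhs-arch-srv2 rhs-arch-srv3 rhs-arch-srv4; do
    echo -n "$h:/rhs/brick3/b3:"
    ssh $h 'ls /rhs/brick3/b3' | xargs
done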

3 brick replica set before attaching tier 
=========================================
srv2:/rhs/brick3/b3:fuse2 fuse7 fuse9 nfs10 nfs2 nfs4 nfs9
srv3:/rhs/brick3/b3:fuse2 fuse7 fuse9 nfs10 nfs2 nfs4 nfs9
srv4:/rhs/brick3/b3:fuse2 fuse7 fuse9 nfs10 nfs2 nfs4 nfs9

2 brick replica set - each brick has different data 
===============================================
[root@inception ~]# ll /rhs/brick3/b3/
total 61464
-rw-r--r--. 2 root root 10485760 Apr 24 16:49 fuse1
-rw-r--r--. 2 root root 10485760 Apr 24 16:49 fuse3
-rw-r--r--. 2 root root 10485760 Apr 24 16:49 fuse4
-rw-r--r--. 2 root root 10485760 Apr 24 16:49 fuse5
-rw-r--r--. 2 root root 10485760 Apr 24 16:49 fuse6
-rw-r--r--. 2 root root 10485760 Apr 24 16:49 nfs8

[root@rhs-arch-srv2 ~]# ll /rhs/brick3/b3
total 71708
-rw-r--r--. 2 root root 10485760 Apr 24 16:49 fuse2
-rw-r--r--. 2 root root 10485760 Apr 24 16:49 fuse7
-rw-r--r--. 2 root root 10485760 Apr 24 16:49 fuse9
-rw-r--r--. 2 root root 10485760 Apr 24 16:49 nfs10
-rw-r--r--. 2 root root 10485760 Apr 24 16:49 nfs2
-rw-r--r--. 2 root root 10485760 Apr 24 16:49 nfs4
-rw-r--r--. 2 root root 10485760 Apr 24 16:49 nfs9

Actual results:
After attaching the replica 2 hot tier, gluster volume info reports a 10 x 2 distributed-replicate layout, and bricks that used to form a replica 3 set now hold different files (see the listings above).

Expected results:
Attaching a hot tier should not change the replica count of the existing volume; the original 6 x 3 replica sets should remain intact and their bricks consistent.

Additional info:

Comment 2 Niels de Vos 2015-05-15 13:07:47 UTC
This change should not be in "ON_QA"; the patch posted for this bug is only available in the master branch and not in a release yet. Moving back to MODIFIED until there is a beta release for the next GlusterFS version.

Comment 4 Amar Tumballi 2018-10-08 09:53:43 UTC
This bug was in ON_QA status, and on the GlusterFS product in Bugzilla we don't have that as a valid status. We are closing it as CURRENTRELEASE to indicate the availability of the fix; please reopen if it is found again.

