Bug 1229257 - Incorrect vol info post detach on disperse volume
Summary: Incorrect vol info post detach on disperse volume
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: tier
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.1.0
Assignee: Bug Updates Notification Mailing List
QA Contact: Nag Pavan Chilakam
URL:
Whiteboard:
Depends On: 1214387
Blocks: 1202842
 
Reported: 2015-06-08 10:36 UTC by Nag Pavan Chilakam
Modified: 2016-09-17 15:37 UTC
CC List: 11 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of: 1214387
Environment:
Last Closed: 2015-07-29 04:59:00 UTC
Embargoed:


Links:
Red Hat Product Errata RHSA-2015:1495 (SHIPPED_LIVE): Important: Red Hat Gluster Storage 3.1 update, 2015-07-29 08:26:26 UTC

Description Nag Pavan Chilakam 2015-06-08 10:36:35 UTC
+++ This bug was initially created as a clone of Bug #1214387 +++

Description of problem:
Detaching the tier from a disperse (cold) volume messes up the disperse volume's vol info.

Version-Release number of selected component (if applicable):
glusterfs-server-3.7dev-0.994.git0d36d4f.el6.x86_64

How reproducible:


Steps to Reproduce:
1. Create a simple disperse volume [ 1 x (4+2)]
2. Attach a tier (replica) to the volume
3. Now, detach the tier. 
4. This is what the vol info shows post the detach:
[root@dhcp35-56 ~]# gluster vol info
 
Volume Name: vol2
Type: Distributed-Disperse
Volume ID: b267ba15-82df-4842-bd41-ac233592d5ba
Status: Created
Number of Bricks: 3 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.35.56:/rhs/brick1
Brick2: 10.70.35.67:/rhs/brick1
Brick3: 10.70.35.56:/rhs/brick2
Brick4: 10.70.35.67:/rhs/brick2
Brick5: 10.70.35.56:/rhs/brick3
Brick6: 10.70.35.67:/rhs/brick3

After the detach, the volume shows up as "Distributed-Disperse" and the distribute count is 3 for no apparent reason. The expected output is Type: Disperse with Number of Bricks: 1 x (4 + 2) = 6, i.e. the layout the volume had before the tier was attached. (The command sequence used is sketched below.)
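
For reference, a minimal sketch of one possible command sequence for the steps above, assuming the hosts and cold-tier brick paths shown in the vol info output; the hot-tier brick paths (/rhs/hotbrick1) are placeholders, not the ones actually used:

# 1. Create a simple 1 x (4+2) disperse volume
gluster volume create vol2 disperse 6 redundancy 2 \
    10.70.35.56:/rhs/brick1 10.70.35.67:/rhs/brick1 \
    10.70.35.56:/rhs/brick2 10.70.35.67:/rhs/brick2 \
    10.70.35.56:/rhs/brick3 10.70.35.67:/rhs/brick3 force

# 2. Attach a replica-2 hot tier (placeholder brick paths)
gluster volume attach-tier vol2 replica 2 \
    10.70.35.56:/rhs/hotbrick1 10.70.35.67:/rhs/hotbrick1

# 3. Detach the tier again
gluster volume detach-tier vol2 start
gluster volume detach-tier vol2 commit

# 4. Check the volume layout
gluster volume info vol2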

--- Additional comment from Dan Lambright on 2015-04-22 13:13:39 EDT ---

I believe this was called out and fixed in a separate bug. Assigning to Rafi to take a look.

--- Additional comment from Mohammed Rafi KC on 2015-04-23 01:25:51 EDT ---

upstream patch : http://review.gluster.org/#/c/10339/

--- Additional comment from Niels de Vos on 2015-05-15 09:07:25 EDT ---

This change should not be in "ON_QA"; the patch posted for this bug is only available in the master branch and not in any release yet. Moving back to MODIFIED until there is a beta release of the next GlusterFS version.

Comment 4 Triveni Rao 2015-06-18 10:30:57 UTC
[root@rhsqa14-vm3 ~]# gluster v create ecvol disperse 6 redundancy 2 10.70.47.159:/rhs/brick1/e0 10.70.46.2:/rhs/brick1/e0 10.70.47.159:/rhs/brick2/e0 10.70.46.2:/rhs/brick2/e0 10.70.47.159:/rhs/brick3/e0 10.70.46.2:/rhs/brick3/e0 force
volume create: ecvol: success: please start the volume to access data
[root@rhsqa14-vm3 ~]# gluster v info
 
Volume Name: ecvol
Type: Disperse
Volume ID: 140cf106-24e8-4c0b-8b87-75c4d361fdca
Status: Created
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.47.159:/rhs/brick1/e0
Brick2: 10.70.46.2:/rhs/brick1/e0
Brick3: 10.70.47.159:/rhs/brick2/e0
Brick4: 10.70.46.2:/rhs/brick2/e0
Brick5: 10.70.47.159:/rhs/brick3/e0
Brick6: 10.70.46.2:/rhs/brick3/e0
Options Reconfigured:
performance.readdir-ahead: on
[root@rhsqa14-vm3 ~]# 


[root@rhsqa14-vm3 ~]# gluster v attach-tier ecvol replica 2 10.70.47.159:/rhs/brick4/e0 10.70.46.2:/rhs/brick4/e0
Attach tier is recommended only for testing purposes in this release. Do you want to continue? (y/n) y
volume attach-tier: success
volume rebalance: ecvol: failed: Volume ecvol needs to be started to perform rebalance
Failed to run tier start. Please execute tier start command explictly
Usage : gluster volume rebalance <volname> tier start
[root@rhsqa14-vm3 ~]# gluster v start ecvol
volume start: ecvol: success
[root@rhsqa14-vm3 ~]# gluster v attach-tier ecvol replica 2 10.70.47.159:/rhs/brick4/e0 10.70.46.2:/rhs/brick4/e0
Attach tier is recommended only for testing purposes in this release. Do you want to continue? (y/n) y
volume attach-tier: failed: Volume ecvol is already a tier.
[root@rhsqa14-vm3 ~]# gluster v attach-tier ecvol replica 2 10.70.47.159:/rhs/brick4/e0 10.70.46.2:/rhs/brick4/e0 force
Attach tier is recommended only for testing purposes in this release. Do you want to continue? (y/n) y
volume attach-tier: failed: Volume ecvol is already a tier.
[root@rhsqa14-vm3 ~]# 
[root@rhsqa14-vm3 ~]# gluster v info
 
Volume Name: ecvol
Type: Tier
Volume ID: 140cf106-24e8-4c0b-8b87-75c4d361fdca
Status: Started
Number of Bricks: 8
Transport-type: tcp
Hot Tier :
Hot Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick1: 10.70.46.2:/rhs/brick4/e0
Brick2: 10.70.47.159:/rhs/brick4/e0
Cold Tier:
Cold Tier Type : Disperse
Number of Bricks: 1 x (4 + 2) = 6
Brick3: 10.70.47.159:/rhs/brick1/e0
Brick4: 10.70.46.2:/rhs/brick1/e0
Brick5: 10.70.47.159:/rhs/brick2/e0
Brick6: 10.70.46.2:/rhs/brick2/e0
Brick7: 10.70.47.159:/rhs/brick3/e0
Brick8: 10.70.46.2:/rhs/brick3/e0
Options Reconfigured:
performance.readdir-ahead: on
[root@rhsqa14-vm3 ~]# 


[root@rhsqa14-vm3 ~]# gluster volume rebalance ecvol tier start
volume rebalance: ecvol: success: Rebalance on ecvol has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: d72d6a3b-b9b2-479c-b961-0bf500a98588

[root@rhsqa14-vm3 ~]# 
[root@rhsqa14-vm3 ~]# 
[root@rhsqa14-vm3 ~]# gluster volume rebalance ecvol tier status
Node                 Promoted files       Demoted files        Status              
---------            ---------            ---------            ---------           
localhost            0                    0                    in progress         
10.70.46.2           0                    0                    in progress         
volume rebalance: ecvol: success: 
[root@rhsqa14-vm3 ~]# 


[root@rhsqa14-vm3 ~]# gluster v detach-tier ecvol start
volume detach-tier start: success
ID: 1a9a6afa-f81a-4372-b3d0-ccc43c874661
[root@rhsqa14-vm3 ~]# 
[root@rhsqa14-vm3 ~]# gluster v detach-tier ecvol commit
volume detach-tier commit: success
Check the detached bricks to ensure all files are migrated.
If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick. 
[root@rhsqa14-vm3 ~]# 
[root@rhsqa14-vm3 ~]# gluster v info
 
Volume Name: ecvol
Type: Disperse
Volume ID: 140cf106-24e8-4c0b-8b87-75c4d361fdca
Status: Started
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.47.159:/rhs/brick1/e0
Brick2: 10.70.46.2:/rhs/brick1/e0
Brick3: 10.70.47.159:/rhs/brick2/e0
Brick4: 10.70.46.2:/rhs/brick2/e0
Brick5: 10.70.47.159:/rhs/brick3/e0
Brick6: 10.70.46.2:/rhs/brick3/e0
Options Reconfigured:
performance.readdir-ahead: on
[root@rhsqa14-vm3 ~]# 


[root@rhsqa14-vm3 ~]# rpm -qa | grep gluster
glusterfs-3.7.1-3.el6rhs.x86_64
glusterfs-cli-3.7.1-3.el6rhs.x86_64
glusterfs-geo-replication-3.7.1-3.el6rhs.x86_64
glusterfs-libs-3.7.1-3.el6rhs.x86_64
glusterfs-client-xlators-3.7.1-3.el6rhs.x86_64
glusterfs-fuse-3.7.1-3.el6rhs.x86_64
glusterfs-server-3.7.1-3.el6rhs.x86_64
glusterfs-rdma-3.7.1-3.el6rhs.x86_64
glusterfs-api-3.7.1-3.el6rhs.x86_64
glusterfs-debuginfo-3.7.1-3.el6rhs.x86_64
[root@rhsqa14-vm3 ~]# 

NOTE:

vol info shows up properly after detach-tier. However, when I/O was running on the volume while detach-tier was executed, the I/O failed and exited on the mount point.
We will mark this bug as VERIFIED; a separate bug will be opened for the I/O failure. (A rough sketch of that scenario is given below.)
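
For clarity, a rough sketch of the I/O-during-detach scenario described above, assuming a FUSE mount on a client; the mount point (/mnt/ecvol) and the dd loop are illustrative placeholders, not the exact workload that was run:

# Mount the volume on a client (mount point is a placeholder)
mount -t glusterfs 10.70.47.159:/ecvol /mnt/ecvol

# Keep some I/O going on the mount in the background ...
for i in $(seq 1 1000); do
    dd if=/dev/zero of=/mnt/ecvol/file.$i bs=1M count=10
done &

# ... and detach the tier on a server node while it runs
gluster volume detach-tier ecvol start
gluster volume detach-tier ecvol commit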

Comment 5 errata-xmlrpc 2015-07-29 04:59:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html

