Bug 1271725 - Data Tiering: Disallow attach tier on a volume where any rebalance process is in progress, to avoid deadlock (e.g. remove-brick commit pending)
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: tier
Version: 3.1
Hardware/OS: Unspecified / Unspecified
Priority/Severity: urgent / urgent
Target Milestone: ---
Target Release: RHGS 3.1.2
Assigned To: Mohammed Rafi KC
QA Contact: nchilaka
Keywords: Triaged, ZStream
Depends On: 1258833
Blocks: 1260783 1260923 1261819
Reported: 2015-10-14 10:37 EDT by Mohammed Rafi KC
Modified: 2016-09-17 11:34 EDT (History)
CC: 8 users

See Also:
Fixed In Version: glusterfs-3.7.5-0.3
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1258833
Environment:
Last Closed: 2016-03-01 00:39:27 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Mohammed Rafi KC 2015-10-14 10:37:20 EDT
+++ This bug was initially created as a clone of Bug #1258833 +++

Description of problem:
=====================
When attaching a tier, a check should be made for pending rebalance operations.
For example, I had a remove-brick operation that had completed, but the commit was not yet done.
I was still able to attach a tier.
This creates a deadlock: the tier daemon does not start by itself on attach-tier because the remove-brick is not committed, and the remove-brick cannot be committed because the volume is now a tier volume.

So a check should be added before going ahead with attach-tier.
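The requested guard can be illustrated with a small shell sketch. This is not the actual fix (which belongs in glusterd's attach-tier validation); the `has_pending_task` helper and the sample status text are purely illustrative, keying off the fact that an uncommitted remove-brick task still appears in the volume's task list:

```shell
# Illustrative pre-flight check only -- the real fix lives inside glusterd,
# not in a wrapper script. has_pending_task() reads `gluster volume status`
# output on stdin and succeeds when a remove-brick task is still listed
# (i.e. started but not yet committed or stopped).
has_pending_task() {
    grep -q 'Remove brick'
}

# Sample task section, as printed by `gluster v status rebal` in this report:
status_output='Task                 : Remove brick
ID                   : 464ee968-e3a4-41f0-89f7-6d6ec4ea1a62
Status               : completed'

if printf '%s\n' "$status_output" | has_pending_task; then
    verdict="refuse attach-tier: uncommitted remove-brick task present"
else
    verdict="ok to attach tier"
fi
echo "$verdict"
```

Note that even a "completed" remove-brick blocks attach-tier here, because the task stays in the task list until it is committed or stopped.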




Version-Release number of selected component (if applicable):
=============================================================
[root@nag-manual-node1 glusterfs]# gluster --version
glusterfs 3.7.3 built on Aug 27 2015 01:23:05
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@nag-manual-node1 glusterfs]# rpm -qa|grep gluster
glusterfs-libs-3.7.3-0.82.git6c4096f.el6.x86_64
glusterfs-fuse-3.7.3-0.82.git6c4096f.el6.x86_64
glusterfs-server-3.7.3-0.82.git6c4096f.el6.x86_64
glusterfs-3.7.3-0.82.git6c4096f.el6.x86_64
glusterfs-api-3.7.3-0.82.git6c4096f.el6.x86_64
glusterfs-cli-3.7.3-0.82.git6c4096f.el6.x86_64
python-gluster-3.7.3-0.82.git6c4096f.el6.noarch
glusterfs-client-xlators-3.7.3-0.82.git6c4096f.el6.x86_64


How reproducible:
====================
very easily


Steps to Reproduce:
===================
1. Create a distribute volume with, say, 4 bricks.
2. Issue a remove-brick and wait for it to complete.
3. Without committing the remove-brick, go ahead and attach a tier.
4. The tier daemon does not start because the commit is pending, nor can the remove-brick be committed because it is now a tier volume. Hence the deadlock.





Expected results:
===================
Disallow attach-tier if any rebalance operations are pending.




CLI LOG:
=======

[root@nag-manual-node1 glusterfs]# gluster v create rebal 10.70.46.84:/rhs/brick1/rebal 10.70.46.36:/rhs/brick1/rebal 10.70.46.36:/rhs/brick2/rebal
volume create: rebal: success: please start the volume to access data

[root@nag-manual-node1 glusterfs]# gluster v start rebal
volume start: rebal: success
[root@nag-manual-node1 glusterfs]# gluster v info rebal
 
Volume Name: rebal
Type: Distribute
Volume ID: 3e272970-b319-4a35-a8cd-6845190761ee
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 10.70.46.84:/rhs/brick1/rebal
Brick2: 10.70.46.36:/rhs/brick1/rebal
Brick3: 10.70.46.36:/rhs/brick2/rebal
Options Reconfigured:
performance.readdir-ahead: on

[root@nag-manual-node1 glusterfs]# gluster v remove-brick rebal 10.70.46.36:/rhs/brick2/rebal start
volume remove-brick start: success
ID: 464ee968-e3a4-41f0-89f7-6d6ec4ea1a62
[root@nag-manual-node1 glusterfs]# gluster v remove-brick rebal 10.70.46.36:/rhs/brick2/rebal status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                             10.70.46.36                0        0Bytes             0             0             0            completed               0.00
[root@nag-manual-node1 glusterfs]# gluster v status rebal
Status of volume: rebal
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.46.84:/rhs/brick1/rebal         49187     0          Y       7849 
Brick 10.70.46.36:/rhs/brick1/rebal         49186     0          Y       32414
Brick 10.70.46.36:/rhs/brick2/rebal         49187     0          Y       32432
NFS Server on localhost                     2049      0          Y       7972 
NFS Server on 10.70.46.36                   2049      0          Y       32452
 
Task Status of Volume rebal
------------------------------------------------------------------------------
Task                 : Remove brick        
ID                   : 464ee968-e3a4-41f0-89f7-6d6ec4ea1a62
Removed bricks:     
10.70.46.36:/rhs/brick2/rebal
Status               : completed           
 


[root@nag-manual-node1 glusterfs]# gluster v info rebal
 
Volume Name: rebal
Type: Distribute
Volume ID: 3e272970-b319-4a35-a8cd-6845190761ee
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 10.70.46.84:/rhs/brick1/rebal
Brick2: 10.70.46.36:/rhs/brick1/rebal
Brick3: 10.70.46.36:/rhs/brick2/rebal
Options Reconfigured:
performance.readdir-ahead: on
[root@nag-manual-node1 glusterfs]# gluster v attach-tier rebal 10.70.46.84:/rhs/brick4/rebalhot 10.70.46.36:/rhs/brick4/rebalhot
Attach tier is recommended only for testing purposes in this release. Do you want to continue? (y/n) y
volume attach-tier: success
volume rebalance: rebal: failed: A remove-brick task on volume rebal is not yet committed. Either commit or stop the remove-brick task.
Failed to run tier start. Please execute tier start command explictly
Usage : gluster volume rebalance <volname> tier start
[root@nag-manual-node1 glusterfs]# gluster v info rebal
 
Volume Name: rebal
Type: Tier
Volume ID: 3e272970-b319-4a35-a8cd-6845190761ee
Status: Started
Number of Bricks: 5
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distribute
Number of Bricks: 2
Brick1: 10.70.46.36:/rhs/brick4/rebalhot
Brick2: 10.70.46.84:/rhs/brick4/rebalhot
Cold Tier:
Cold Tier Type : Distribute
Number of Bricks: 3
Brick3: 10.70.46.84:/rhs/brick1/rebal
Brick4: 10.70.46.36:/rhs/brick1/rebal
Brick5: 10.70.46.36:/rhs/brick2/rebal
Options Reconfigured:
performance.readdir-ahead: on
[root@nag-manual-node1 glusterfs]# gluster  v status rebal
Status of volume: rebal
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Hot Bricks:
Brick 10.70.46.36:/rhs/brick4/rebalhot      49188     0          Y       32571
Brick 10.70.46.84:/rhs/brick4/rebalhot      49188     0          Y       8027 
Cold Bricks:
Brick 10.70.46.84:/rhs/brick1/rebal         49187     0          Y       7849 
Brick 10.70.46.36:/rhs/brick1/rebal         49186     0          Y       32414
Brick 10.70.46.36:/rhs/brick2/rebal         49187     0          Y       32432
NFS Server on localhost                     2049      0          Y       8047 
NFS Server on 10.70.46.36                   2049      0          Y       32590
 
Task Status of Volume rebal
------------------------------------------------------------------------------
Task                 : Remove brick        
ID                   : 464ee968-e3a4-41f0-89f7-6d6ec4ea1a62
Removed bricks:     
10.70.46.36:/rhs/brick2/rebal
Status               : completed           
 
[root@nag-manual-node1 glusterfs]# gluster v rebal rebal status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                             10.70.46.36                0        0Bytes             0             0             0            completed               0.00
volume rebalance: rebal: success: 
[root@nag-manual-node1 glusterfs]# gluster v rebal rebal tier status
Node                 Promoted files       Demoted files        Status              
---------            ---------            ---------            ---------           
localhost            0                    0                    not started         
10.70.46.36          0                    0                    completed           
[root@nag-manual-node1 glusterfs]# gluster v rebalance rebal tier start
volume rebalance: rebal: failed: A remove-brick task on volume rebal is not yet committed. Either commit or stop the remove-brick task.
[root@nag-manual-node1 glusterfs]# gluster v rebalance rebal tier status

[root@nag-manual-node1 glusterfs]# gluster v remove-brick rebal 10.70.46.36:/rhs/brick2/rebal commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit: failed: Removing brick from a Tier volume is not allowed

--- Additional comment from nchilaka on 2015-09-01 08:07:20 EDT ---

Workaround:
==========
> Do a detach-tier commit forcefully.
> Do a remove-brick commit forcefully (though the remove-brick operation no longer shows up in the volume status or rebalance status).
> Reattach the tier.
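The workaround steps above can be collected into a dry-run sketch that only prints the recovery sequence for review before anything is executed. The volume and brick names are taken from the log in this report; `HOT_BRICK1`/`HOT_BRICK2` are placeholders for the hot-tier bricks you intend to reattach:

```shell
# Dry run of the workaround: print the recovery commands instead of
# executing them. Review the plan, then run the lines by hand.
VOL=rebal                                  # volume name from this report
BRICK=10.70.46.36:/rhs/brick2/rebal        # brick with the uncommitted remove-brick
plan=$(cat <<EOF
gluster volume detach-tier $VOL commit force
gluster volume remove-brick $VOL $BRICK commit
gluster volume attach-tier $VOL HOT_BRICK1 HOT_BRICK2
EOF
)
printf '%s\n' "$plan"
```

Printing first rather than executing is deliberate: the remove-brick commit warns about possible data loss, so the sequence should be reviewed before running.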


[root@nag-manual-node1 glusterfs]# gluster v detach-tier rebal commit
volume detach-tier commit: failed: Brick 10.70.46.84:/rhs/brick4/rebalhot is not decommissioned. Use start or force option
[root@nag-manual-node1 glusterfs]# gluster v detach-tier rebal commit force
volume detach-tier commit force: success
[root@nag-manual-node1 glusterfs]# gluster v info rebal
 
Volume Name: rebal
Type: Distribute
Volume ID: 3e272970-b319-4a35-a8cd-6845190761ee
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 10.70.46.84:/rhs/brick1/rebal
Brick2: 10.70.46.36:/rhs/brick1/rebal
Brick3: 10.70.46.36:/rhs/brick2/rebal
Options Reconfigured:
performance.readdir-ahead: on
[root@nag-manual-node1 glusterfs]# gluster  v status rebal
Status of volume: rebal
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.46.84:/rhs/brick1/rebal         49187     0          Y       7849 
Brick 10.70.46.36:/rhs/brick1/rebal         49186     0          Y       32414
Brick 10.70.46.36:/rhs/brick2/rebal         49187     0          Y       32432
NFS Server on localhost                     2049      0          Y       8455 
NFS Server on 10.70.46.36                   2049      0          Y       402  
 
Task Status of Volume rebal
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@nag-manual-node1 glusterfs]# gluster v rebal rebal status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
volume rebalance: rebal: success: 
[root@nag-manual-node1 glusterfs]# gluster v remove-brick rebal 10.70.46.36:/rhs/brick2/rebal commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit: success
Check the removed bricks to ensure all files are migrated.
If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick.

--- Additional comment from Mohammed Rafi KC on 2015-09-10 04:50:15 EDT ---

Nag,
Thanks for catching this bug. Good work
Comment 3 surabhi 2015-11-25 04:41:00 EST
The following steps were used to verify the BZ:

1. Created a distribute volume with 3 bricks.
2. Executed remove-brick to remove one of the bricks.
3. Without committing the remove-brick, attempted to attach a tier.

Expected result:
Attach-tier should not be allowed if a rebalance/remove-brick is in progress.

Actual result:
Attach-tier failed when the remove-brick was not committed / a rebalance was in progress.

Volume Name: distribute
Type: Distribute
Volume ID: 8d918c68-893d-4173-8a09-1e0baef90b7f
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 10.70.46.111:/rhs/brick3/b1
Brick2: 10.70.46.90:/rhs/brick3/b2
Brick3: 10.70.46.136:/bricks/brick3/b3
Options Reconfigured:
performance.readdir-ahead: on
[root@localhost ~]# gluster vol remove-brick distribute 10.70.46.136:/bricks/brick3/b3 start
volume remove-brick start: success
ID: ea216473-7dba-4439-94cf-734f13891f55
[root@localhost ~]# gluster vol remove-brick distribute 10.70.46.136:/bricks/brick3/b3 status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                            10.70.46.136                0        0Bytes             0             0             0            completed               0.00

[root@localhost ~]# gluster vol attach-tier distribute 10.70.46.111:/rhs/brick4/hot1 10.70.46.90:/rhs/brick4/hot2
volume attach-tier: failed: An earlier remove-brick task exists for volume distribute. Either commit it or stop it before attaching a tier.
Tier command failed

[root@localhost ~]# gluster vol remove-brick distribute 10.70.46.136:/bricks/brick3/b3 commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit: success
Check the removed bricks to ensure all files are migrated.
If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick. 

[root@localhost ~]# gluster vol attach-tier distribute 10.70.46.111:/rhs/brick4/hot1 10.70.46.90:/rhs/brick4/hot2
volume attach-tier: success
Tiering Migration Functionality: distribute: success: Attach tier is successful on distribute. use tier status to check the status.
ID: abd37426-ea0f-49c4-b502-833568162eb5



Marking the BZ as verified on build:
rpm -qa | grep glusterfs
glusterfs-3.7.5-7.el7rhgs.x86_64
glusterfs-api-3.7.5-7.el7rhgs.x86_64
glusterfs-server-3.7.5-7.el7rhgs.x86_64
glusterfs-rdma-3.7.5-7.el7rhgs.x86_64
glusterfs-fuse-3.7.5-7.el7rhgs.x86_64
glusterfs-geo-replication-3.7.5-7.el7rhgs.x86_64
samba-vfs-glusterfs-4.2.4-6.el7rhgs.x86_64
glusterfs-libs-3.7.5-7.el7rhgs.x86_64
glusterfs-client-xlators-3.7.5-7.el7rhgs.x86_64
glusterfs-cli-3.7.5-7.el7rhgs.x86_64
Comment 5 errata-xmlrpc 2016-03-01 00:39:27 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0193.html
