Bug 1284387

Summary: Without detach tier commit, status changes back to tier migration
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Bhaskarakiran <byarlaga>
Component: tier
Assignee: Bug Updates Notification Mailing List <rhs-bugs>
Status: CLOSED ERRATA
QA Contact: Nag Pavan Chilakam <nchilaka>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: rhgs-3.1
CC: josferna, mzywusko, nchilaka, rcyriac, rhs-bugs, rkavunga, sankarshan, storage-qa-internal
Target Milestone: ---
Keywords: Regression, Reopened, ZStream
Target Release: RHGS 3.1.2
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: glusterfs-3.7.5-11
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Clones: 1286974 (view as bug list)
Environment:
Last Closed: 2016-03-01 05:57:01 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1260783, 1286974, 1289898

Description Bhaskarakiran 2015-11-23 08:45:19 UTC
Description of problem:
=======================

On an 8+4 EC volume, I attached a 2x2 dist-rep tier. I performed some I/O and then tried to detach the tier: I ran the detach-tier start command, left it for quite some time, and did not run a commit. The task status changed back to "Tier migration". All I/O is going to the cold tier, though, which suggests the detach-tier operation actually completed.
gluster v detach-tier status reports it as not started.

[root@transformers ~]# gluster v detach-tier vol1 status
volume detach-tier status: failed: Detach-tiernot started
Tier command failed
[root@transformers ~]# 

[root@transformers ~]# gluster v status vol1
Status of volume: vol1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Hot Bricks:
Brick ninja:/rhs/brick2/vol1-tier4          49165     0          Y       6948 
Brick vertigo:/rhs/brick2/vol1-tier3        49163     0          Y       5313 
Brick ninja:/rhs/brick1/vol1-tier2          49164     0          Y       6966 
Brick vertigo:/rhs/brick1/vol1-tier1        49162     0          Y       5331 
Cold Bricks:
Brick transformers:/rhs/brick1/b1           49175     0          Y       9032 
Brick interstellar:/rhs/brick1/b2           49173     0          Y       6351 
Brick transformers:/rhs/brick2/b3           49176     0          Y       43031
Brick interstellar:/rhs/brick2/b4           49174     0          Y       40088
Brick transformers:/rhs/brick3/b5           49177     0          Y       43049
Brick interstellar:/rhs/brick3/b6           49175     0          Y       40106
Brick transformers:/rhs/brick4/b7           49178     0          Y       43067
Brick interstellar:/rhs/brick4/b8           49176     0          Y       40124
Brick transformers:/rhs/brick5/b9           49179     0          Y       43085
Brick interstellar:/rhs/brick5/b10          49177     0          Y       40142
Brick transformers:/rhs/brick7/b11          49182     0          Y       43103
Brick interstellar:/rhs/brick6/b12          49178     0          Y       40160
Snapshot Daemon on localhost                49181     0          Y       43122
NFS Server on localhost                     2049      0          Y       13558
Self-heal Daemon on localhost               N/A       N/A        Y       13409
Quota Daemon on localhost                   N/A       N/A        Y       13507
Snapshot Daemon on ninja                    49156     0          Y       6986 
NFS Server on ninja                         2049      0          Y       8275 
Self-heal Daemon on ninja                   N/A       N/A        Y       8152 
Quota Daemon on ninja                       N/A       N/A        Y       8231 
Snapshot Daemon on vertigo                  49154     0          Y       5350 
NFS Server on vertigo                       2049      0          Y       14322
Self-heal Daemon on vertigo                 N/A       N/A        Y       14207
Quota Daemon on vertigo                     N/A       N/A        Y       14286
Snapshot Daemon on interstellar.lab.eng.blr
.redhat.com                                 49179     0          Y       40180
NFS Server on interstellar.lab.eng.blr.redh
at.com                                      2049      0          Y       10440
Self-heal Daemon on interstellar.lab.eng.bl
r.redhat.com                                N/A       N/A        Y       10314
Quota Daemon on interstellar.lab.eng.blr.re
dhat.com                                    N/A       N/A        Y       10393
 
Task Status of Volume vol1
------------------------------------------------------------------------------
Task                 : Tier migration      
ID                   : 01e8b478-633d-4d8a-8785-f11f936f910d
Status               : in progress         
 
[root@transformers ~]# 


Version-Release number of selected component (if applicable):
=============================================================
3.7.5-6

How reproducible:
=================
Seen a couple of times

Steps to Reproduce:
===================
As in the description.
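The workflow from the description can be sketched as the following command sequence. This is a hedged sketch, not a verbatim transcript: it assumes the volume name vol1 from the status output above, a working RHGS 3.1 (glusterfs 3.7.x) cluster, and that the tier is already attached.

```shell
# Start detaching the hot tier; this begins demoting files to the cold tier
gluster volume detach-tier vol1 start

# Poll the demotion progress while it runs
gluster volume detach-tier vol1 status

# The bug: if no commit is issued, the task shown in 'gluster volume status'
# eventually reverts to "Tier migration", and 'detach-tier status' reports
# "not started" even though demotion has finished. The expected final step is:
gluster volume detach-tier vol1 commit
```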

Actual results:
===============
The detach-tier task status changes back to "Tier migration"

Expected results:
=================
The detach-tier operation should complete and report success

Additional info:
================
sosreports will be copied to rhsqe

Comment 2 Bhaskarakiran 2015-11-27 05:47:14 UTC
Could not reproduce this bug. Closing for now.

Comment 3 Bhaskarakiran 2015-11-30 11:13:12 UTC
Re-opening this bug:
====================

Ideally, a volume restart should bring back up any processes that are down. I was under that impression and had not looked at it in the context of this bug. I am now able to reproduce it. Here are the steps:

1. Create an EC volume (8+4) with a 2x2 dist-rep tier attached
2. Start I/O and let it run for some time, so as to fill the tier volume
3. Run detach tier start and check the status. Let some of the files be demoted; check with the rebalance command.
4. Now do a volume start force and check the volume status

The task shown as "Detach tier" changes to "Tier migration". This should not be the behaviour when the volume is restarted.
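The four steps above can be sketched as a command sequence. Volume and brick names are illustrative, reusing vol1 from the status output earlier in this report; steps 1 and 2 (volume setup and I/O) are summarized in comments.

```shell
# 1. EC volume (8+4) with a 2x2 dist-rep hot tier already attached
# 2. Run I/O against the mount until the hot tier fills

# 3. Start the detach and watch demotion progress
gluster volume detach-tier vol1 start
gluster volume detach-tier vol1 status    # or check via the rebalance status output

# 4. Force-restart the volume, then inspect the task list
gluster volume start vol1 force
gluster volume status vol1                # task flips from "Detach tier" to "Tier migration"
```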

Comment 5 Mohammed Rafi KC 2015-12-01 09:30:23 UTC
upstream bug : http://review.gluster.org/12833

Comment 6 Mohammed Rafi KC 2015-12-10 09:22:03 UTC
downstream patch : https://code.engineering.redhat.com/gerrit/63464

Comment 7 Bhaskarakiran 2015-12-15 08:42:25 UTC
Verified on 3.7.5-11 and it works. Marking this as fixed.

Comment 10 errata-xmlrpc 2016-03-01 05:57:01 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0193.html