Bug 1264913 - Data Tiering: Tiering daemon needs to be resilient when detach tier fails (tier vol should work as expected when detach tier fails)
Status: CLOSED EOL
Product: GlusterFS
Classification: Community
Component: tiering
Version: 3.7.4
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Assigned To: Dan Lambright
bugs@gluster.org
Whiteboard: Triaged
Depends On:
Blocks: 1278356 1276742
 
Reported: 2015-09-21 08:47 EDT by nchilaka
Modified: 2017-03-08 05:59 EST (History)
3 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
: 1278356
Environment:
Last Closed: 2017-03-08 05:59:59 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments

None
Description nchilaka 2015-09-21 08:47:57 EDT
Description of problem:
=====================
When a detach tier fails, the volume is stuck in that state: any new file creates go to the cold tier instead of the hot tier. But since the detach failed, writes should still be directed to the hot tier, not the cold tier.
The user needs to issue a detach tier stop to route the I/O back to the hot tier.

[root@zod ~]# gluster v detach-tier uzbek start
volume detach-tier start: failed: Commit failed on yarrow. Please check log file for details.
Tier command failed
[root@zod ~]# gluster v detach-tier uzbek status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost                0        0Bytes             0             1             0               failed               0.00
[root@zod ~]# gluster v detach-tier uzbek status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost                0        0Bytes             0             1             0               failed               0.00
[root@zod ~]# gluster v tier  uzbek status
Node                 Promoted files       Demoted files        Status              
---------            ---------            ---------            ---------           
localhost            0                    0                    failed              
volume rebalance: uzbek: success: 



Version-Release number of selected component (if applicable):
===========================================================
[root@zod uzbek]# rpm -qa|grep gluster
glusterfs-3.7.4-0.43.gitf139283.el7.centos.x86_64
glusterfs-fuse-3.7.4-0.43.gitf139283.el7.centos.x86_64
glusterfs-debuginfo-3.7.4-0.33.git1d02d4b.el7.centos.x86_64
glusterfs-api-3.7.4-0.43.gitf139283.el7.centos.x86_64
glusterfs-client-xlators-3.7.4-0.43.gitf139283.el7.centos.x86_64
glusterfs-server-3.7.4-0.43.gitf139283.el7.centos.x86_64
glusterfs-cli-3.7.4-0.43.gitf139283.el7.centos.x86_64
glusterfs-libs-3.7.4-0.43.gitf139283.el7.centos.x86_64
[root@zod uzbek]# gluster --version
glusterfs 3.7.4 built on Sep 19 2015 01:30:43
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@zod uzbek]# 



Steps to Reproduce:
=====================
1. Have a two-node setup and create a tiered volume with bricks spread across both nodes.
2. Put files on the hot tier and issue a detach tier start from node#1.
3. Immediately, from node#2, kill glusterd.
4. The detach tier fails immediately with "failed: Commit failed on yarrow. Please check log file for details."

5. Now create new files from the mount. The I/O goes to the cold tier, but it should go to the hot tier since the detach tier failed at the very beginning.

6. Also, the user cannot redo a detach tier start. He first needs to stop the existing failed detach tier and then reissue it.
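The steps above can be sketched as a console session; the volume name (uzbek) and the peer hostname (yarrow) are the examples from this report, and the commands assume a live two-node tiered cluster:

```shell
# Sketch of the reproduction and recovery sequence; assumes a two-node
# cluster with a tiered volume named "uzbek", as in this report.

# On node#1: start detaching the hot tier.
gluster volume detach-tier uzbek start

# On node#2, immediately afterwards: kill glusterd to force a commit failure.
pkill glusterd

# Back on node#1: status shows the detach failed ("Commit failed on yarrow").
gluster volume detach-tier uzbek status

# Bug: new files from the mount now land on the cold tier, and a fresh
# "detach-tier start" is rejected. The only way out is to stop the failed
# detach so I/O is routed back to the hot tier:
gluster volume detach-tier uzbek stop
```

This sequence cannot run outside a Gluster cluster; it is only a condensed restatement of the reproduction steps.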

Actual results:


Expected results:


Additional info:
Comment 2 Kaushal 2017-03-08 05:59:59 EST
This bug is getting closed because GlusterFS-3.7 has reached its end-of-life.

Note: This bug is being closed using a script. No verification has been performed to check if it still exists on newer releases of GlusterFS.
If this bug still exists in newer GlusterFS releases, please reopen this bug against the newer release.
