Bug 1212830

Summary: Data Tiering: AFR (replica) self-heal daemon details go missing on attach-tier
Product: [Community] GlusterFS
Component: tiering
Version: mainline
Reporter: Nag Pavan Chilakam <nchilaka>
Assignee: sankarshan <sankarshan>
QA Contact: bugs <bugs>
CC: bugs, sankarshan
Status: CLOSED WONTFIX
Severity: urgent
Priority: urgent
Keywords: Reopened, Triaged
Doc Type: Bug Fix
Type: Bug
Cloned by: 1278413
Bug Blocks: 1186580, 1199352, 1278413
Last Closed: 2018-11-02 08:15:15 UTC

Description Nag Pavan Chilakam 2015-04-17 12:37:37 UTC
Description of problem:
======================
When we create a distributed-replicate (dist-rep) volume and check the status, the output lists the AFR self-heal daemon, as below:
Self-heal Daemon on 10.70.34.56             N/A       N/A        Y       24601

But after attach-tier, this daemon process no longer shows up in the volume status output.


Version-Release number of selected component (if applicable):
============================================================
[root@interstellar ~]# gluster --version
glusterfs 3.7dev built on Apr 13 2015 07:14:27
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@interstellar ~]# rpm -qa|grep gluster
glusterfs-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-resource-agents-3.7dev-0.994.gitf522001.el6.noarch
glusterfs-cli-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-api-devel-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-fuse-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-extra-xlators-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-rdma-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-api-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-devel-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-debuginfo-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-libs-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-geo-replication-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-server-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-regression-tests-3.7dev-0.994.gitf522001.el6.x86_64


Steps to Reproduce:
===================
1. Create a replica-3 distributed-replicate volume (a plausible create command is sketched below).
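(For reference, a plausible reconstruction of the create and start commands, using the volume name and brick paths visible in the status output below; the exact commands were not recorded in this report:)

gluster volume create rep3 replica 3 \
    ninja:/rhs/brick1/rep3a interstellar:/rhs/brick1/rep3a transformers:/rhs/brick1/rep3a \
    interstellar:/rhs/brick1/rep3b ninja:/rhs/brick1/rep3b transformers:/rhs/brick1/rep3b
gluster volume start rep3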
2. Start the volume and check its status:
[root@interstellar ~]# gluster v status rep3
Status of volume: rep3
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ninja:/rhs/brick1/rep3a               49234     0          Y       14452
Brick interstellar:/rhs/brick1/rep3a        49187     0          Y       60206
Brick transformers:/rhs/brick1/rep3a        49172     0          Y       14930
Brick interstellar:/rhs/brick1/rep3b        49188     0          Y       60223
Brick ninja:/rhs/brick1/rep3b               49235     0          Y       14471
Brick transformers:/rhs/brick1/rep3b        49173     0          Y       14948
NFS Server on localhost                     N/A       N/A        N       N/A  
Self-heal Daemon on localhost               N/A       N/A        Y       62245
NFS Server on 10.70.34.56                   N/A       N/A        N       N/A  
Self-heal Daemon on 10.70.34.56             N/A       N/A        Y       24601
NFS Server on ninja                         N/A       N/A        N       N/A  
Self-heal Daemon on ninja                   N/A       N/A        Y       14898
NFS Server on transformers                  N/A       N/A        N       N/A  
Self-heal Daemon on transformers            N/A       N/A        Y       15357
 
Task Status of Volume rep3
------------------------------------------------------------------------------
There are no active volume tasks
 

3. Now attach a tier and re-run the status command; note that the self-heal daemons no longer show up:
[root@interstellar ~]# gluster v attach-tier rep3 replica 3 ninja:/rhs/brick1/rep3-tier interstellar:/rhs/brick1/rep3-tier transformers:/rhs/brick1/rep3-tier 
volume add-brick: success
[root@interstellar ~]# gluster v status rep3
Status of volume: rep3
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick transformers:/rhs/brick1/rep3-tier    49175     0          Y       15496
Brick interstellar:/rhs/brick1/rep3-tier    49190     0          Y       62447
Brick ninja:/rhs/brick1/rep3-tier           49237     0          Y       15080
Brick ninja:/rhs/brick1/rep3a               49234     0          Y       14452
Brick interstellar:/rhs/brick1/rep3a        49187     0          Y       60206
Brick transformers:/rhs/brick1/rep3a        49172     0          Y       14930
Brick interstellar:/rhs/brick1/rep3b        49188     0          Y       60223
Brick ninja:/rhs/brick1/rep3b               49235     0          Y       14471
Brick transformers:/rhs/brick1/rep3b        49173     0          Y       14948
NFS Server on localhost                     N/A       N/A        N       N/A  
NFS Server on ninja                         N/A       N/A        N       N/A  
NFS Server on 10.70.34.56                   N/A       N/A        N       N/A  
NFS Server on transformers                  N/A       N/A        N       N/A  
 
Task Status of Volume rep3
------------------------------------------------------------------------------
There are no active volume tasks
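To confirm whether glustershd is actually still running after attach-tier, or merely missing from the status listing, one could additionally check on each node (these checks are not part of the original report):

# Look for the self-heal daemon process directly
ps -ef | grep glustershd

# Query the self-heal daemon status explicitly via the CLI
gluster volume status rep3 shd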



For more logs, refer to bz#1212822.

Comment 1 Dan Lambright 2015-04-23 15:05:58 UTC
We do not support self-healing with tiered volumes in V1. We will support it in the future. Marking as deferred.

Comment 2 Nag Pavan Chilakam 2015-11-05 12:19:28 UTC
I think now we should be supporting this, right?

Comment 4 Amar Tumballi 2018-11-02 08:15:15 UTC
Patch https://review.gluster.org/#/c/glusterfs/+/21331/ removes the tier functionality from GlusterFS.

https://bugzilla.redhat.com/show_bug.cgi?id=1642807 is used as the tracking bug for this. The recommendation is to convert your tiered volume to a regular volume (replicate, EC, or plain distribute) with the "tier detach" command before upgrading, and to use backend features such as dm-cache to provide caching at the storage layer for better performance and functionality.
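For reference, a hedged sketch of the detach sequence recommended above, using the rep3 volume from this report (syntax as in the later 3.x CLI; verify against your installed version):

# Start migrating data off the hot tier
gluster volume tier rep3 detach start

# Poll until the migration completes
gluster volume tier rep3 detach status

# Finalize: remove the hot-tier bricks, leaving a regular distributed-replicate volume
gluster volume tier rep3 detach commit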