Bug 1278413

Summary: Data Tiering: AFR (replica) self-heal daemon details go missing on attach-tier
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Nag Pavan Chilakam <nchilaka>
Component: tier
Assignee: Mohammed Rafi KC <rkavunga>
Status: CLOSED DUPLICATE
QA Contact: Nag Pavan Chilakam <nchilaka>
Severity: urgent
Priority: urgent
Version: rhgs-3.1
CC: bugs, dlambrig, josferna, rhs-bugs, storage-qa-internal, vagarwal
Target Milestone: ---
Keywords: Reopened, Triaged
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1212830
Environment:
Last Closed: 2015-11-06 11:37:21 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1212830
Bug Blocks: 1186580, 1199352, 1260923

Description Nag Pavan Chilakam 2015-11-05 12:20:35 UTC
+++ This bug was initially created as a clone of Bug #1212830 +++

Description of problem:
======================
When we create a dist-rep volume and check its status, the output lists the AFR self-heal daemon, for example:
Self-heal Daemon on 10.70.34.56             N/A       N/A        Y       24601

But after attach-tier, this daemon entry no longer shows up in the volume status output.
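
A quick way to tell whether the daemon itself has stopped or is merely missing from the status table is to look for the glustershd process and query the self-heal daemon through the CLI. This is a diagnostic sketch, not output captured from this setup:

# Is the self-heal daemon process still running on this node?
ps -ef | grep glustershd
# Ask the CLI for the self-heal daemon status of the volume
gluster volume status rep3 shd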


Version-Release number of selected component (if applicable):
============================================================
[root@interstellar ~]# gluster --version
glusterfs 3.7dev built on Apr 13 2015 07:14:27
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@interstellar ~]# rpm -qa|grep gluster
glusterfs-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-resource-agents-3.7dev-0.994.gitf522001.el6.noarch
glusterfs-cli-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-api-devel-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-fuse-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-extra-xlators-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-rdma-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-api-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-devel-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-debuginfo-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-libs-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-geo-replication-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-server-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-regression-tests-3.7dev-0.994.gitf522001.el6.x86_64


Steps to Reproduce:
===================
1. Create a 2x3 distributed-replicate (replica 3) volume named rep3 (a reconstructed create command is sketched after the steps).
2. Start the volume and check its status:
[root@interstellar ~]# gluster v status rep3
Status of volume: rep3
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ninja:/rhs/brick1/rep3a               49234     0          Y       14452
Brick interstellar:/rhs/brick1/rep3a        49187     0          Y       60206
Brick transformers:/rhs/brick1/rep3a        49172     0          Y       14930
Brick interstellar:/rhs/brick1/rep3b        49188     0          Y       60223
Brick ninja:/rhs/brick1/rep3b               49235     0          Y       14471
Brick transformers:/rhs/brick1/rep3b        49173     0          Y       14948
NFS Server on localhost                     N/A       N/A        N       N/A  
Self-heal Daemon on localhost               N/A       N/A        Y       62245
NFS Server on 10.70.34.56                   N/A       N/A        N       N/A  
Self-heal Daemon on 10.70.34.56             N/A       N/A        Y       24601
NFS Server on ninja                         N/A       N/A        N       N/A  
Self-heal Daemon on ninja                   N/A       N/A        Y       14898
NFS Server on transformers                  N/A       N/A        N       N/A  
Self-heal Daemon on transformers            N/A       N/A        Y       15357
 
Task Status of Volume rep3
------------------------------------------------------------------------------
There are no active volume tasks
 

3. Now attach a tier and reissue the command; the self-heal daemons no longer show up:
[root@interstellar ~]# gluster v attach-tier rep3 replica 3 ninja:/rhs/brick1/rep3-tier interstellar:/rhs/brick1/rep3-tier transformers:/rhs/brick1/rep3-tier 
volume add-brick: success
[root@interstellar ~]# gluster v status rep3
Status of volume: rep3
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick transformers:/rhs/brick1/rep3-tier    49175     0          Y       15496
Brick interstellar:/rhs/brick1/rep3-tier    49190     0          Y       62447
Brick ninja:/rhs/brick1/rep3-tier           49237     0          Y       15080
Brick ninja:/rhs/brick1/rep3a               49234     0          Y       14452
Brick interstellar:/rhs/brick1/rep3a        49187     0          Y       60206
Brick transformers:/rhs/brick1/rep3a        49172     0          Y       14930
Brick interstellar:/rhs/brick1/rep3b        49188     0          Y       60223
Brick ninja:/rhs/brick1/rep3b               49235     0          Y       14471
Brick transformers:/rhs/brick1/rep3b        49173     0          Y       14948
NFS Server on localhost                     N/A       N/A        N       N/A  
NFS Server on ninja                         N/A       N/A        N       N/A  
NFS Server on 10.70.34.56                   N/A       N/A        N       N/A  
NFS Server on transformers                  N/A       N/A        N       N/A  
 
Task Status of Volume rep3
------------------------------------------------------------------------------
There are no active volume tasks
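
For reference, the volume from step 1 could have been created with something like the following. This is a sketch reconstructed from the brick list in the status output above; the actual create command was not captured in this report:

gluster volume create rep3 replica 3 \
    ninja:/rhs/brick1/rep3a interstellar:/rhs/brick1/rep3a transformers:/rhs/brick1/rep3a \
    interstellar:/rhs/brick1/rep3b ninja:/rhs/brick1/rep3b transformers:/rhs/brick1/rep3b
gluster volume start rep3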



For more logs, refer to bz#1212822.

--- Additional comment from Dan Lambright on 2015-04-23 11:05:58 EDT ---

We do not support self-healing with tiered volumes in V1. We will support it in the future. Marking as deferred.

--- Additional comment from nchilaka on 2015-11-05 07:19:28 EST ---

I think we should be supporting this now, right?