Bug 1278413 - Data Tiering: AFR (replica) self-heal daemon details go missing on attach-tier
Status: CLOSED DUPLICATE of bug 1231144
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: tier
3.1
Unspecified Unspecified
urgent Severity urgent
Assigned To: Mohammed Rafi KC
nchilaka
Keywords: Reopened, Triaged
Depends On: 1212830
Blocks: qe_tracker_everglades glusterfs-3.7.0 1260923
 
Reported: 2015-11-05 07:20 EST by nchilaka
Modified: 2016-09-17 11:39 EDT (History)
6 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1212830
Environment:
Last Closed: 2015-11-06 06:37:21 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description nchilaka 2015-11-05 07:20:35 EST
+++ This bug was initially created as a clone of Bug #1212830 +++

Description of problem:
======================
When we create a dist-rep volume and check its status, the output lists the AFR self-heal daemon, for example:
Self-heal Daemon on 10.70.34.56             N/A       N/A        Y       24601

But after attach-tier this daemon process no longer shows up in volume status.


Version-Release number of selected component (if applicable):
============================================================
[root@interstellar ~]# gluster --version
glusterfs 3.7dev built on Apr 13 2015 07:14:27
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@interstellar ~]# rpm -qa|grep gluster
glusterfs-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-resource-agents-3.7dev-0.994.gitf522001.el6.noarch
glusterfs-cli-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-api-devel-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-fuse-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-extra-xlators-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-rdma-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-api-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-devel-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-debuginfo-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-libs-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-geo-replication-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-server-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-regression-tests-3.7dev-0.994.gitf522001.el6.x86_64


Steps to Reproduce:
===================
1. Created a 3x dist-rep volume.
2. Started the volume and issued a status command:
[root@interstellar ~]# gluster v status rep3
Status of volume: rep3
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ninja:/rhs/brick1/rep3a               49234     0          Y       14452
Brick interstellar:/rhs/brick1/rep3a        49187     0          Y       60206
Brick transformers:/rhs/brick1/rep3a        49172     0          Y       14930
Brick interstellar:/rhs/brick1/rep3b        49188     0          Y       60223
Brick ninja:/rhs/brick1/rep3b               49235     0          Y       14471
Brick transformers:/rhs/brick1/rep3b        49173     0          Y       14948
NFS Server on localhost                     N/A       N/A        N       N/A  
Self-heal Daemon on localhost               N/A       N/A        Y       62245
NFS Server on 10.70.34.56                   N/A       N/A        N       N/A  
Self-heal Daemon on 10.70.34.56             N/A       N/A        Y       24601
NFS Server on ninja                         N/A       N/A        N       N/A  
Self-heal Daemon on ninja                   N/A       N/A        Y       14898
NFS Server on transformers                  N/A       N/A        N       N/A  
Self-heal Daemon on transformers            N/A       N/A        Y       15357
 
Task Status of Volume rep3
------------------------------------------------------------------------------
There are no active volume tasks
 

3. Now attach a tier and reissue the command; the self-heal daemons no longer show up:
[root@interstellar ~]# gluster v attach-tier rep3 replica 3 ninja:/rhs/brick1/rep3-tier interstellar:/rhs/brick1/rep3-tier transformers:/rhs/brick1/rep3-tier 
volume add-brick: success
[root@interstellar ~]# gluster v status rep3
Status of volume: rep3
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick transformers:/rhs/brick1/rep3-tier    49175     0          Y       15496
Brick interstellar:/rhs/brick1/rep3-tier    49190     0          Y       62447
Brick ninja:/rhs/brick1/rep3-tier           49237     0          Y       15080
Brick ninja:/rhs/brick1/rep3a               49234     0          Y       14452
Brick interstellar:/rhs/brick1/rep3a        49187     0          Y       60206
Brick transformers:/rhs/brick1/rep3a        49172     0          Y       14930
Brick interstellar:/rhs/brick1/rep3b        49188     0          Y       60223
Brick ninja:/rhs/brick1/rep3b               49235     0          Y       14471
Brick transformers:/rhs/brick1/rep3b        49173     0          Y       14948
NFS Server on localhost                     N/A       N/A        N       N/A  
NFS Server on ninja                         N/A       N/A        N       N/A  
NFS Server on 10.70.34.56                   N/A       N/A        N       N/A  
NFS Server on transformers                  N/A       N/A        N       N/A  
 
Task Status of Volume rep3
------------------------------------------------------------------------------
There are no active volume tasks
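The regression can be confirmed mechanically by counting the self-heal daemon rows in saved `gluster v status` captures taken before and after attach-tier. A minimal sketch (the `count_shd` helper name is made up for illustration, not part of the gluster CLI):

```shell
#!/bin/sh
# Count "Self-heal Daemon" rows in a saved `gluster v status` capture.
# Before attach-tier this should equal the number of online nodes
# running the daemon; after the attach-tier in this report it drops to 0.
count_shd() {
    grep -c '^Self-heal Daemon' "$1"
}
```

Usage would be along the lines of `gluster v status rep3 > status-after.txt` followed by `count_shd status-after.txt`; a result of 0 reproduces the missing-daemon symptom above.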



For more logs refer to bz#1212822

--- Additional comment from Dan Lambright on 2015-04-23 11:05:58 EDT ---

We do not support self healing with tiered volumes in V1. We will support it in the future. Marking as deferred.

--- Additional comment from nchilaka on 2015-11-05 07:19:28 EST ---

I think we should be supporting this now, right?
