Bug 1231144 - Data Tiering; Self-heal daemon stops showing up in "vol status" once attach tier is done
Summary: Data Tiering; Self-heal daemon stops showing up in "vol status" once attach t...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: tier
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
urgent
medium
Target Milestone: ---
Target Release: RHGS 3.1.2
Assignee: Mohammed Rafi KC
QA Contact: Nag Pavan Chilakam
URL:
Whiteboard:
Duplicates: 1278413
Depends On:
Blocks: qe_tracker_everglades glusterfs-3.7.0 1260783 1260923
 
Reported: 2015-06-12 09:11 UTC by Nag Pavan Chilakam
Modified: 2016-09-17 15:41 UTC (History)
5 users (show)

Fixed In Version: glusterfs-3.7.5-6
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-03-01 05:26:02 UTC
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:0193 0 normal SHIPPED_LIVE Red Hat Gluster Storage 3.1 update 2 2016-03-01 10:20:36 UTC

Description Nag Pavan Chilakam 2015-06-12 09:11:16 UTC
Description of problem:
======================
Once a tier is attached, it was observed that "vol status" no longer shows the self-heal daemon.
I have checked the logs of glustershd and others and I don't see any process getting killed or restarted, so this is not a very serious bug, but it is annoying to have that information missing from the status command output.
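
For reference, a minimal sketch of the kind of checks described above, assuming the default glusterfs log location and the volume name used in the transcript below; exact paths may differ:

# glustershd process is still up on each node
ps aux | grep glustershd

# no kills/restarts logged by the self-heal daemon
less /var/log/glusterfs/glustershd.log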



 

Version-Release number of selected component (if applicable):
============================================================
[root@rhsqa14-vm4 glusterfs]# gluster --version
glusterfs 3.7.1 built on Jun 12 2015 00:21:18
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@rhsqa14-vm4 glusterfs]# rpm -qa|grep gluster
glusterfs-libs-3.7.1-2.el6rhs.x86_64
glusterfs-cli-3.7.1-2.el6rhs.x86_64
glusterfs-rdma-3.7.1-2.el6rhs.x86_64
glusterfs-3.7.1-2.el6rhs.x86_64
glusterfs-api-3.7.1-2.el6rhs.x86_64
glusterfs-fuse-3.7.1-2.el6rhs.x86_64
glusterfs-server-3.7.1-2.el6rhs.x86_64
glusterfs-geo-replication-3.7.1-2.el6rhs.x86_64
glusterfs-debuginfo-3.7.1-2.el6rhs.x86_64
glusterfs-client-xlators-3.7.1-2.el6rhs.x86_64
[root@rhsqa14-vm4 glusterfs]# sestatus
SELinux status:                 enabled
SELinuxfs mount:                /selinux
Current mode:                   enforcing
Mode from config file:          enforcing
Policy version:                 24
Policy from config file:        targeted
[root@rhsqa14-vm4 glusterfs]# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 6.7 Beta (Santiago)



How reproducible:
================
Very easily.


Steps to Reproduce:
==================
1. Create a dist-rep volume and start it.
2. Issue a status of the volume and note the self-heal daemon entries.
3. Attach a tier and reissue the vol status command.

It can be seen that the self-heal daemon no longer shows up (a condensed command sequence follows; the full transcript is further below).
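
A condensed command sequence for these steps, using the hostnames and brick paths from the transcript below (substitute your own):

# 1. Create and start a 2x2 distributed-replicate volume
gluster volume create distrep2 replica 2 \
    10.70.47.159:/rhs/brick1/distrep2 10.70.46.2:/rhs/brick1/distrep2 \
    10.70.47.159:/rhs/brick2/distrep2 10.70.46.2:/rhs/brick2/distrep2
gluster volume start distrep2

# 2. Note the "Self-heal Daemon" lines in the output
gluster volume status distrep2

# 3. Attach a tier and reissue the status command
gluster volume attach-tier distrep2 10.70.47.159:/rhs/brick3/distrep2 10.70.46.2:/rhs/brick3/distrep2
gluster volume status distrep2    # the Self-heal Daemon lines are now missing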


Expected results:
================
The self-heal daemon should show in status even for a tiered volume.


[root@rhsqa14-vm4 glusterfs]# gluster v create distrep2 replica 2 10.70.47.159:/rhs/brick1/distrep2 10.70.46.2:/rhs/brick1/distrep2  10.70.47.159:/rhs/brick2/distrep2 10.70.46.2:/rhs/brick2/distrep2 
volume create: distrep2: success: please start the volume to access data
[root@rhsqa14-vm4 glusterfs]# gluster v info distrep2
 
Volume Name: distrep2
Type: Distributed-Replicate
Volume ID: c9e2791c-5e81-48e8-8a11-5a2efbdd26da
Status: Created
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.47.159:/rhs/brick1/distrep2
Brick2: 10.70.46.2:/rhs/brick1/distrep2
Brick3: 10.70.47.159:/rhs/brick2/distrep2
Brick4: 10.70.46.2:/rhs/brick2/distrep2
Options Reconfigured:
performance.readdir-ahead: on
[root@rhsqa14-vm4 glusterfs]# gluster v start distrep2
volume start: distrep2: success
[root@rhsqa14-vm4 glusterfs]# gluster v status distrep2
Status of volume: distrep2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.47.159:/rhs/brick1/distrep2     49155     0          Y       16387
Brick 10.70.46.2:/rhs/brick1/distrep2       49155     0          Y       6250 
Brick 10.70.47.159:/rhs/brick2/distrep2     49156     0          Y       16405
Brick 10.70.46.2:/rhs/brick2/distrep2       49156     0          Y       6268 
NFS Server on localhost                     2049      0          Y       6287 
Self-heal Daemon on localhost               N/A       N/A        Y       6298 
NFS Server on 10.70.47.159                  2049      0          Y       16424
Self-heal Daemon on 10.70.47.159            N/A       N/A        Y       16444
 
Task Status of Volume distrep2
------------------------------------------------------------------------------
There are no active volume tasks
[root@rhsqa14-vm4 glusterfs]# gluster v attach-tier distrep2 10.70.47.159:/rhs/brick3/distrep2 10.70.46.2:/rhs/brick3/distrep2
Attach tier is recommended only for testing purposes in this release. Do you want to continue? (y/n) y
volume attach-tier: success
volume rebalance: distrep2: success: Rebalance on distrep2 has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: 3d0a7f90-de10-4e5d-81ca-63fa0fc41db7

[root@rhsqa14-vm4 glusterfs]# gluster v status distrep2
Status of volume: distrep2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Hot Bricks:
Brick 10.70.46.2:/rhs/brick3/distrep2       49157     0          Y       6434 
Brick 10.70.47.159:/rhs/brick3/distrep2     49157     0          Y       16501
Cold Bricks:
Brick 10.70.47.159:/rhs/brick1/distrep2     49155     0          Y       16387
Brick 10.70.46.2:/rhs/brick1/distrep2       49155     0          Y       6250 
Brick 10.70.47.159:/rhs/brick2/distrep2     49156     0          Y       16405
Brick 10.70.46.2:/rhs/brick2/distrep2       49156     0          Y       6268 
NFS Server on localhost                     2049      0          Y       6453 
NFS Server on 10.70.47.159                  2049      0          Y       16522
 
Task Status of Volume distrep2
------------------------------------------------------------------------------
Task                 : Rebalance           
ID                   : 3d0a7f90-de10-4e5d-81ca-63fa0fc41db7
Status               : in progress

Comment 3 Vivek Agarwal 2015-11-06 11:37:21 UTC
*** Bug 1278413 has been marked as a duplicate of this bug. ***

Comment 6 Nag Pavan Chilakam 2015-11-10 11:35:14 UTC
Works now; moving it to Verified:

[root@zod ~]# gluster v status ecx
Status of volume: ecx
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Hot Bricks:
Brick yarrow:/rhs/brick6/ecx_hot            49168     0          Y       27301
Brick zod:/rhs/brick6/ecx_hot               49168     0          Y       17176
Brick yarrow:/rhs/brick7/ecx_hot            49167     0          Y       27281
Brick zod:/rhs/brick7/ecx_hot               49167     0          Y       17158
Cold Bricks:
Brick zod:/rhs/brick1/ecx                   49162     0          Y       15768
Brick yarrow:/rhs/brick1/ecx                49162     0          Y       25762
Brick zod:/rhs/brick2/ecx                   49163     0          Y       15786
Brick yarrow:/rhs/brick2/ecx                49163     0          Y       25782
Brick zod:/rhs/brick3/ecx                   49164     0          Y       15804
Brick yarrow:/rhs/brick3/ecx                49164     0          Y       25804
NFS Server on localhost                     N/A       N/A        N       N/A  
Self-heal Daemon on localhost               N/A       N/A        Y       17203
NFS Server on yarrow                        N/A       N/A        N       N/A  
Self-heal Daemon on yarrow                  N/A       N/A        Y       27351
 
Task Status of Volume ecx
------------------------------------------------------------------------------
Task                 : Tier migration      
ID                   : b658e3ad-0d7c-457e-9611-87e176cc950d
Status               : in progress         




[root@zod ~]# rpm -qa|grep gluster
glusterfs-libs-3.7.5-6.el7rhgs.x86_64
glusterfs-fuse-3.7.5-6.el7rhgs.x86_64
glusterfs-3.7.5-6.el7rhgs.x86_64
glusterfs-server-3.7.5-6.el7rhgs.x86_64
glusterfs-client-xlators-3.7.5-6.el7rhgs.x86_64
glusterfs-cli-3.7.5-6.el7rhgs.x86_64
glusterfs-api-3.7.5-6.el7rhgs.x86_64
glusterfs-debuginfo-3.7.5-6.el7rhgs.x86_64
[root@zod ~]#

Comment 8 errata-xmlrpc 2016-03-01 05:26:02 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0193.html

