Bug 980097 - afr: glustershd did not start on the newly added peer in cluster
Summary: afr: glustershd did not start on the newly added peer in cluster
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterd
Version: 2.1
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Assignee: raghav
QA Contact: Rahul Hinduja
URL:
Whiteboard:
Depends On: 980468
Blocks:
 
Reported: 2013-07-01 12:19 UTC by Rahul Hinduja
Modified: 2013-09-23 22:29 UTC
CC List: 4 users

Fixed In Version: glusterfs-3.4.0.12rhs.beta2
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-09-23 22:29:53 UTC
Embargoed:



Description Rahul Hinduja 2013-07-01 12:19:43 UTC
Description of problem:
=======================

glustershd did not start on the newly added peer in the cluster.

Version-Release number of selected component (if applicable):
=============================================================

glusterfs-geo-replication-3.4.0.12rhs.beta1-1.el6rhs.x86_64
glusterfs-3.4.0.12rhs.beta1-1.el6rhs.x86_64
glusterfs-server-3.4.0.12rhs.beta1-1.el6rhs.x86_64
glusterfs-rdma-3.4.0.12rhs.beta1-1.el6rhs.x86_64
glusterfs-fuse-3.4.0.12rhs.beta1-1.el6rhs.x86_64

Steps Carried:
==============
1. Created a cluster of two systems.
2. Created a 1x2 replicate volume using bricks from the cluster nodes.
3. The self-heal daemon started on both systems.
4. Probed a new system to add it to the cluster.
5. The peer probe succeeded and the new system is part of the cluster.
6. Checked for the glustershd process on the newly added system; it was not started (a rough command sketch follows below).
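
A rough sketch of the corresponding commands; the volume name and brick paths are taken from the status output below, while the probe target rhs-client13 is an assumption for the newly added node:

# Assuming the two original nodes are already peered with each other:
gluster volume create vol-test replica 2 \
    10.70.36.35:/rhs/brick1/r1 10.70.36.36:/rhs/brick1/r2
gluster volume start vol-test

gluster peer probe rhs-client13     # add the new node to the cluster
gluster peer status                 # confirm the probe succeeded

# On the newly added node, check whether glustershd is running:
ps -ef | grep glustershd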

Actual results:
================

[root@rhs-client13 ~]# ps -eaf | grep glustershd
root     11101  3607  0 17:51 pts/0    00:00:00 grep glustershd
[root@rhs-client13 ~]#

[root@rhs-client13 ~]# gluster volume status
Status of volume: vol-test
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick 10.70.36.35:/rhs/brick1/r1			49152	Y	11004
Brick 10.70.36.36:/rhs/brick1/r2			49152	Y	10537
NFS Server on localhost					2049	Y	10481
Self-heal Daemon on localhost				N/A	N	N/A
NFS Server on c9ccfd62-1ae9-41f2-a04e-c604c431746f	2049	Y	11018
Self-heal Daemon on c9ccfd62-1ae9-41f2-a04e-c604c431746f	N/A	Y	11022
NFS Server on 3c008bc0-520c-4414-bbcd-abb641117d62	2049	Y	10551
Self-heal Daemon on 3c008bc0-520c-4414-bbcd-abb641117d62	N/A	Y	10555
 
There are no active volume tasks
[root@rhs-client13 ~]# 



Expected results:
=================

glustershd should start on the newly added system in the cluster.
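
A sketch of how the expected state would be verified on the new node (same assumed hostnames as above):

# On rhs-client13, after the peer probe:
ps -ef | grep glustershd            # a glustershd process should be listed
gluster volume status vol-test      # Self-heal Daemon for this node should show Online = Y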

Comment 3 raghav 2013-07-03 07:31:01 UTC

*** This bug has been marked as a duplicate of bug 980468 ***

Comment 4 raghav 2013-07-03 08:28:42 UTC
The fix for bug 980468 will also fix this bug, but since this is a different test scenario for QA, this bug is being reopened. Bug 980468 has been marked as a blocker for this bug.

Comment 6 Scott Haines 2013-09-23 22:29:53 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. 

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html

