Bug 980097 - afr: glustershd did not start on the newly added peer in cluster
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
Version: 2.1
Hardware: x86_64 Linux
Priority: high  Severity: high
Assigned To: raghav
QA Contact: Rahul Hinduja
Keywords: Reopened
Depends On: 980468
 
Reported: 2013-07-01 08:19 EDT by Rahul Hinduja
Modified: 2013-09-23 18:29 EDT
CC List: 4 users

Fixed In Version: glusterfs-3.4.0.12rhs.beta2
Doc Type: Bug Fix
Last Closed: 2013-09-23 18:29:53 EDT
Type: Bug

Description Rahul Hinduja 2013-07-01 08:19:43 EDT
Description of problem:
=======================

glustershd did not start on the newly added peer in the cluster.

Version-Release number of selected component (if applicable):
=============================================================

glusterfs-geo-replication-3.4.0.12rhs.beta1-1.el6rhs.x86_64
glusterfs-3.4.0.12rhs.beta1-1.el6rhs.x86_64
glusterfs-server-3.4.0.12rhs.beta1-1.el6rhs.x86_64
glusterfs-rdma-3.4.0.12rhs.beta1-1.el6rhs.x86_64
glusterfs-fuse-3.4.0.12rhs.beta1-1.el6rhs.x86_64

Steps Carried:
==============
1. Created a cluster of two systems
2. Created a 1x2 replicate volume on the cluster.
3. The self-heal daemon started on both systems.
4. Probed a new system to add it to the cluster.
5. The peer probe was successful and the new system is part of the cluster.
6. Checked for the glustershd process on the newly added system; it was not started. (A command-level sketch of these steps follows below.)
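
For reference, the steps above map roughly to the following gluster CLI commands. This is only a sketch: host names and brick paths are taken from the volume status output below, the third node is assumed to be rhs-client13, and the exact commands run during testing were not recorded in this report.

# On the first node: form a two-node cluster
gluster peer probe 10.70.36.36

# Create and start a 1x2 replicate volume (brick paths as in the report)
gluster volume create vol-test replica 2 \
        10.70.36.35:/rhs/brick1/r1 10.70.36.36:/rhs/brick1/r2
gluster volume start vol-test

# The self-heal daemon should now be running on both nodes
gluster volume status vol-test | grep "Self-heal Daemon"

# Probe the third system (assumed to be rhs-client13) into the cluster
gluster peer probe rhs-client13
gluster peer status

# On rhs-client13: glustershd is expected to be running, but is not
ps -ef | grep "[g]lustershd"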

Actual results:
================

[root@rhs-client13 ~]# ps -eaf | grep glustershd
root     11101  3607  0 17:51 pts/0    00:00:00 grep glustershd
[root@rhs-client13 ~]#

[root@rhs-client13 ~]# gluster volume status
Status of volume: vol-test
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick 10.70.36.35:/rhs/brick1/r1			49152	Y	11004
Brick 10.70.36.36:/rhs/brick1/r2			49152	Y	10537
NFS Server on localhost					2049	Y	10481
Self-heal Daemon on localhost				N/A	N	N/A
NFS Server on c9ccfd62-1ae9-41f2-a04e-c604c431746f	2049	Y	11018
Self-heal Daemon on c9ccfd62-1ae9-41f2-a04e-c604c431746f	N/A	Y	11022
NFS Server on 3c008bc0-520c-4414-bbcd-abb641117d62	2049	Y	10551
Self-heal Daemon on 3c008bc0-520c-4414-bbcd-abb641117d62	N/A	Y	10555
 
There are no active volume tasks
[root@rhs-client13 ~]# 



Expected results:
=================

glustershd should start on the newly added system in the cluster.
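
Once glustershd is started on the newly probed node, a check along these lines should confirm it (a sketch; PIDs and ports will differ):

# On the newly added system: the self-heal daemon process should exist
ps -ef | grep "[g]lustershd"

# And volume status should show it Online (Y) on localhost
gluster volume status vol-test | grep "Self-heal Daemon on localhost"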
Comment 3 raghav 2013-07-03 03:31:01 EDT

*** This bug has been marked as a duplicate of bug 980468 ***
Comment 4 raghav 2013-07-03 04:28:42 EDT
The fix for bug 980468 will also fix this bug, but since it is a different test scenario for QA, I am reopening it. Bug 980468 has been marked as a blocker for this bug.
Comment 6 Scott Haines 2013-09-23 18:29:53 EDT
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. 

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html
