Description of problem:
=======================
glustershd did not start on the newly added peer in the cluster.

Version-Release number of selected component (if applicable):
=============================================================
glusterfs-geo-replication-3.4.0.12rhs.beta1-1.el6rhs.x86_64
glusterfs-3.4.0.12rhs.beta1-1.el6rhs.x86_64
glusterfs-server-3.4.0.12rhs.beta1-1.el6rhs.x86_64
glusterfs-rdma-3.4.0.12rhs.beta1-1.el6rhs.x86_64
glusterfs-fuse-3.4.0.12rhs.beta1-1.el6rhs.x86_64

Steps Carried:
==============
1. Created a cluster of two systems.
2. Created a 1x2 replicate volume on the cluster.
3. The self-heal daemon started on both systems.
4. Probed a new system to add it to the cluster.
5. The peer probe succeeded and the new system became part of the cluster.
6. Checked the glustershd process on the newly added system. It was not started.

(A shell sketch of these steps is given after the Expected results section.)

Actual results:
===============
[root@rhs-client13 ~]# ps -eaf | grep glustershd
root     11101  3607  0 17:51 pts/0    00:00:00 grep glustershd
[root@rhs-client13 ~]#
[root@rhs-client13 ~]# gluster volume status
Status of volume: vol-test
Gluster process                                           Port   Online  Pid
------------------------------------------------------------------------------
Brick 10.70.36.35:/rhs/brick1/r1                          49152  Y       11004
Brick 10.70.36.36:/rhs/brick1/r2                          49152  Y       10537
NFS Server on localhost                                   2049   Y       10481
Self-heal Daemon on localhost                             N/A    N       N/A
NFS Server on c9ccfd62-1ae9-41f2-a04e-c604c431746f        2049   Y       11018
Self-heal Daemon on c9ccfd62-1ae9-41f2-a04e-c604c431746f  N/A    Y       11022
NFS Server on 3c008bc0-520c-4414-bbcd-abb641117d62        2049   Y       10551
Self-heal Daemon on 3c008bc0-520c-4414-bbcd-abb641117d62  N/A    Y       10555

There are no active volume tasks
[root@rhs-client13 ~]#

Expected results:
=================
glustershd should start on the newly added system in the cluster.
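For reference, a minimal shell sketch of the reproduction steps above. The volume name and brick paths are taken from the status output; the hostnames node1/node2/node3 are placeholders, not from this report.

# On node1: form a two-node cluster
gluster peer probe node2

# Create and start a 1x2 replicate volume across the two nodes
gluster volume create vol-test replica 2 \
    node1:/rhs/brick1/r1 node2:/rhs/brick1/r2
gluster volume start vol-test

# glustershd should now be running on node1 and node2
ps -eaf | grep glustershd

# Probe a third node into the cluster
gluster peer probe node3

# On node3: glustershd is expected to start, but does not
ps -eaf | grep glustershd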
*** This bug has been marked as a duplicate of bug 980468 ***
The fix for bug 980468 will also fix this bug, but since this is a different test scenario for QA, it is being reopened. Bug 980468 has been marked as a blocker for this bug.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html