Previously, when a cluster had multiple volumes, the first volume in the volume list was not a replicated volume, and any of the other volumes was a replicated volume, the self-heal daemon (shd) did not start after a node reboot. With this fix, shd starts in this scenario.
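As an illustration only (volume names and brick paths below are made up), the triggering layout is a plain distribute volume created first, so that it appears first in the volume list as in the verification further down, plus a replicated volume created afterwards:
# plain distribute volume, created first so it shows up first in 'gluster volume list'
gluster volume create distvol node1:/bricks/d1 node2:/bricks/d2
gluster volume start distvol
# replicated volume created afterwards
gluster volume create repvol replica 2 node1:/bricks/r1 node2:/bricks/r2
gluster volume start repvol
# before the fix, glustershd did not come back after rebooting a node;
# with the fix the daemon should start again and be reported by:
gluster volume status repvol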
Description
-----------
Performed an in-service software upgrade from RHGS 2.1 to RHGS 3.1.
After the upgrade, the self-heal daemon does not come up.
Version
--------
RHGS 2.1 Update 6
RHGS 3.1 (RC4)
How reproducible
-----------------
3/3 - always
Steps to reproduce
-------------------
1. Install RHGS 2.1 Update 6 (glusterfs-3.4.0.72-1.el6rhs)
2. Create a replicate volume and a distributed-replicate volume, and start them
3. FUSE-mount the volumes and add a few files
4. Migrate from RHN Classic to Subscription Manager (RHSM)
5. Using the CDN stage repositories, perform a yum-based in-service software upgrade to RHGS 3.1
6. After the update, reboot the machine
7. Trigger self-heal (a command sketch for steps 2-4 and 7 follows this list)
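A rough command sketch of steps 2-4 and 7 (volume names, brick paths, and the mount point are illustrative; the CDN stage repository setup and the actual yum upgrade are omitted):
# Step 2: create and start a replicate and a distributed-replicate volume
gluster volume create repvol replica 2 node1:/bricks/rep1 node2:/bricks/rep2
gluster volume create distrepvol replica 2 node1:/bricks/dr1 node2:/bricks/dr2 node1:/bricks/dr3 node2:/bricks/dr4
gluster volume start repvol
gluster volume start distrepvol
# Step 3: FUSE-mount a volume and create a few files
mount -t glusterfs node1:/repvol /mnt/repvol
for i in $(seq 1 10); do echo data > /mnt/repvol/file$i; done
# Step 4: migrate the node from RHN Classic to RHSM
rhn-migrate-classic-to-rhsm
# Step 7: trigger self-heal after the upgrade and reboot
gluster volume heal repvol
gluster volume heal distrepvol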
Actual Result
--------------
The self-heal daemon is not running on the node that was upgraded from RHGS 2.1 to RHGS 3.1.
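A generic way to confirm the daemon state on the upgraded node (checks only; not the literal output from this run):
# the glustershd process is absent on the affected node
ps -ef | grep glustershd
# 'gluster volume status' lists a "Self-heal Daemon" row per node for
# replicated volumes; on the affected node it is reported as not running
gluster volume status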
Expected Result
---------------
The self-heal daemon should be up and running post upgrade and reboot.
Verified this bug using RHGS version glusterfs-3.7.1-13.
Steps used to verify it:
~~~~~~~~~~~~~~~~~~~~~~
1. Created and started a plain distributed volume and then a replicated volume on a cluster of two nodes, so the distributed volume appears first in the volume list.
[root@node3 ~]# gluster v list
Dis
replica1
[root@node3 ~]#
2. Rebooted the nodes.
3. After reboot, observed that *shd* was running successfully without any problem (see the check below).
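The check in step 3 could, for example, be done with the usual status commands against the replicated volume shown above (output not reproduced here):
# self-heal daemon should be reported as running for the replicated volume
gluster volume status replica1
# and the glustershd process should be present on both nodes
ps -ef | grep glustershd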
The fix is working fine; moving this bug to the next state.
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
https://rhn.redhat.com/errata/RHSA-2015-1845.html