Description of problem:
On a 6-node cluster, create one volume of each type (distribute, replicate, arbiter, disperse, distribute-replicate, distribute-disperse), start them, and mount them. Stop the distribute volume. Perform an in-service upgrade by stopping all gluster processes on the first node, N1. After the upgrade, a few nodes are in the Peer Rejected state.

Version-Release number of selected component (if applicable):
Upgrade from glusterfs-3.8.4-54.15.el7rhgs.x86_64 to glusterfs-6.0-20.el7rhgs.x86_64

How reproducible:
1/1

Steps to Reproduce:
1. Form a cluster with 6 nodes.
2. Create volumes of all types and start them.
3. Mount all the volumes on two clients and start I/O.
4. Stop the distribute volume.
5. Perform an in-service upgrade on the first node (N1):
   systemctl stop glusterd; pkill glusterfsd; pkill glusterfs

Actual results:
After the upgrade and a reboot, peers are in the Peer Rejected state.

Expected results:
After the post-upgrade reboot, peers should be in the Connected state.

Additional info:
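The rejected-peer condition shows up in `gluster peer status` output as a `State: Peer Rejected (...)` line for the affected peers. A minimal sketch of a check that counts such peers from that output; the sample output and hostnames below are illustrative, not captured from the affected cluster:

```shell
#!/bin/sh
# Count peers whose state line reports "Peer Rejected" in
# `gluster peer status` output supplied on stdin.
count_rejected_peers() {
    grep -c 'State: Peer Rejected' || true
}

# Illustrative sample output (hypothetical; on a live node you would
# pipe the real command instead: gluster peer status | count_rejected_peers)
sample='Number of Peers: 2

Hostname: N2
Uuid: 00000000-0000-0000-0000-000000000002
State: Peer Rejected (Connected)

Hostname: N3
Uuid: 00000000-0000-0000-0000-000000000003
State: Peer in Cluster (Connected)'

printf '%s\n' "$sample" | count_rejected_peers
```

A non-zero count after the upgrade reboot indicates the failure described above; the expectation is that every peer reports `State: Peer in Cluster (Connected)`.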
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2019:3249