Bug 1543296 - After upgrade from RHGS-3.3.1 async (7.4) to RHGS-3.4 (7.5), peer state went to Peer Rejected (Connected).
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: tier
Version: 3.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.4.0
Assigned To: hari gowtham
QA Contact: Rajesh Madaka
Depends On:
Blocks: 1503137
Reported: 2018-02-08 02:52 EST by Rajesh Madaka
Modified: 2018-09-04 02:43 EDT
CC List: 5 users

See Also:
Fixed In Version: glusterfs-3.12.2-4
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-09-04 02:42:41 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


External Trackers:
Red Hat Product Errata RHSA-2018:2607 (Last Updated: 2018-09-04 02:43 EDT)

Description Rajesh Madaka 2018-02-08 02:52:03 EST
Description of problem:

After upgrading from RHEL-7.4 with RHGS-3.3.1 async to RHEL-7.5 with RHGS-3.4, the peer state went to Peer Rejected (Connected).

Version-Release number of selected component (if applicable):

RHGS version:

from version glusterfs-3.8.4-54.el7 to glusterfs-3.12.2-3.el7

OS version:

from RHEL 7.4 to RHEL 7.5
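
For reference, the installed builds before and after the upgrade can be confirmed on each node with standard commands, for example:

  rpm -q glusterfs glusterfs-server
  cat /etc/redhat-release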

How reproducible:
 
100%

Steps to Reproduce:
1. Create 6 RHEL-7.4 machines.
2. Install the RHGS-3.3.1 async build on all RHEL-7.4 machines.
3. Add the firewall services (glusterfs, nfs, rpc-bind) on all cluster servers.
4. From one node, peer probe the remaining 5 servers.
5. Confirm the peer status on all servers is in the connected state.
6. Create around 50 volumes: a mix of two-way distributed-replicate, three-way distributed-replicate, arbitrated replicate, and distributed dispersed volumes.
7. Mount 5 volumes on a RHEL-7.4 client and 5 volumes on a RHEL-7.5 client.
8. Keep any 5 volumes offline.
9. Copy the RHEL 7.5 repos and the RHGS-3.4 repos into /etc/yum.repos.d.
10. Stop the glusterd, glusterfs, and glusterfsd services on the node that is being upgraded.
11. Run yum update on that node.
12. After the upgrade completes successfully, reboot the upgraded node.
13. Once the node is back up, check the peer status from the upgraded node; it shows "State: Peer Rejected (Connected)". (A rough command sketch of these steps is given below.)
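
A rough sketch of the commands behind these steps, run as root on the relevant nodes. The hostnames (server1 .. server6), volume name, and brick paths are hypothetical placeholders, and the actual volume mix and repo files will differ:

  # Step 3: open the firewall for gluster, NFS, and rpcbind on every cluster node
  firewall-cmd --permanent --add-service=glusterfs
  firewall-cmd --permanent --add-service=nfs
  firewall-cmd --permanent --add-service=rpc-bind
  firewall-cmd --reload

  # Steps 4-5: from one node, probe the other five and confirm the cluster state
  gluster peer probe server2        # repeat for server3 .. server6
  gluster peer status               # each peer should show "Peer in Cluster (Connected)"

  # Step 6 (one example of the ~50 volumes): a 2x3 distributed-replicate volume
  gluster volume create testvol replica 3 \
      server1:/bricks/brick1/testvol server2:/bricks/brick1/testvol server3:/bricks/brick1/testvol \
      server4:/bricks/brick1/testvol server5:/bricks/brick1/testvol server6:/bricks/brick1/testvol
  gluster volume start testvol

  # Step 7: mount one of the volumes on a client
  mount -t glusterfs server1:/testvol /mnt/testvol

  # Steps 10-13: on the node being upgraded, stop gluster processes, update, reboot, re-check
  systemctl stop glusterd
  pkill glusterfs; pkill glusterfsd
  yum update -y
  reboot
  # after the reboot, on the upgraded node:
  gluster peer status               # bug: shows "State: Peer Rejected (Connected)"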

Actual results:

The node went to the Peer Rejected state after the upgrade, as shown below:

State: Peer Rejected (Connected)

Expected results:

State: Peer in Cluster (Connected)



Additional info:
Comment 5 Rajesh Madaka 2018-02-15 09:17:50 EST
After upgrading from RHGS-3.3.1 async (7.4) to RHGS-3.4 (7.5), gluster peer status shows the correct state.

Verified with the version below:

Version no : glusterfs-3.12.2-4

After performing the upgrade from the versions below:

RHGS version:

from version glusterfs-3.8.4-54.el7 to glusterfs-3.12.2-3.el7

OS version:

from RHEL 7.4 to RHEL 7.5

gluster peer status shows the correct state, as below:

State: Peer in Cluster (Connected)
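
For reference, the verification on the upgraded node amounts to checking the running build and the peer state, roughly:

  gluster --version       # expect the glusterfs-3.12.2-4 build
  gluster peer status     # every peer should report "State: Peer in Cluster (Connected)"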

Moving the bug to the Verified state.
Comment 7 errata-xmlrpc 2018-09-04 02:42:41 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607
