Bug 1543296 - After upgrade from RHGS-3.3.1 async(7.4) to RHGS-3.4(7.5), peer state went to Peer Rejected (Connected).
Summary: After upgrade from RHGS-3.3.1 async(7.4) to RHGS-3.4(7.5) peer state went to...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: tier
Version: rhgs-3.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.4.0
Assignee: hari gowtham
QA Contact: Rajesh Madaka
URL:
Whiteboard:
Depends On:
Blocks: 1503137
 
Reported: 2018-02-08 07:52 UTC by Rajesh Madaka
Modified: 2018-09-04 06:43 UTC
CC: 5 users

Fixed In Version: glusterfs-3.12.2-4
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-09-04 06:42:41 UTC
Embargoed:




Links
Red Hat Product Errata RHSA-2018:2607 (last updated 2018-09-04 06:43:59 UTC)

Description Rajesh Madaka 2018-02-08 07:52:03 UTC
Description of problem:

After upgrading from RHEL-7.4 / RHGS-3.3.1 async to RHEL-7.5 / RHGS-3.4, the peer state went to Peer Rejected (Connected).

Version-Release number of selected component (if applicable):

RHGS version:

from version glusterfs-3.8.4-54.el7 to glusterfs-3.12.2-3.el7

OS version:

from RHEL 7.4 to RHEL 7.5

How reproducible:
 
100%

Steps to Reproduce:
1. Create 6 RHEL-7.4 machines.
2. Install the RHGS-3.3.1 async build on the RHEL-7.4 machines.
3. Add the firewall services (glusterfs, nfs, rpc-bind) on all the cluster servers.
4. Perform a peer probe from one node to all 5 remaining servers.
5. Confirm that the peer status on all servers is in the connected state.
6. Create around 50 volumes: a mix of two-way distributed-replicate, three-way distributed-replicate, arbitrated replicate, and distributed dispersed volumes.
7. Mount 5 volumes on a RHEL-7.4 client and 5 volumes on a RHEL-7.5 client.
8. Take any 5 volumes offline.
9. Copy the RHEL 7.5 repos and the RHGS-3.4 repos into /etc/yum.repos.d.
10. Stop the glusterd, glusterfs, and glusterfsd services on the node being upgraded.
11. Run yum update on that node.
12. After the upgrade completes successfully, reboot the upgraded node.
13. Once the node is up, check the peer status from the upgraded node; it shows "State: Peer Rejected (Connected)". (A command-level sketch of these steps follows the list.)
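A minimal command-level sketch of steps 3-4 and 10-13, assuming firewalld and yum; the hostnames server2 through server6 and the pkill-based process stops are illustrative assumptions, not taken from this report:

# Step 3: open the required firewalld services on every cluster node
firewall-cmd --permanent --add-service=glusterfs --add-service=nfs --add-service=rpc-bind
firewall-cmd --reload

# Step 4: probe the remaining nodes from one server
for node in server2 server3 server4 server5 server6; do
    gluster peer probe $node
done

# Steps 10-12: on the node being upgraded
systemctl stop glusterd
pkill glusterfs
pkill glusterfsd
yum update
reboot

# Step 13: after the reboot, check the peer state from the upgraded node
gluster peer status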

Actual results:

The node went to the Peer Rejected state after the upgrade, as below:

State: Peer Rejected (Connected)
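For context (general glusterd behaviour, not stated in this report): a Peer Rejected state usually means the volume configuration checksum on the rejected node no longer matches the rest of the cluster. A quick way to confirm, assuming the default glusterd paths and a placeholder volume name <volname>:

# Compare this value between the rejected node and a healthy peer
cat /var/lib/glusterd/vols/<volname>/cksum

# glusterd logs a checksum mismatch when it rejects a peer
grep -i cksum /var/log/glusterfs/glusterd.log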

Expected results:

State: Peer in Cluster (Connected)



Additional info:

Comment 5 Rajesh Madaka 2018-02-15 14:17:50 UTC
After upgrading from RHGS-3.3.1 async (7.4) to RHGS-3.4 (7.5), gluster peer status shows the correct state.

Verified with the version below:

Version no: glusterfs-3.12.2-4

After performing the upgrade from the versions below:

RHGS version:

from version glusterfs-3.8.4-54.el7 to glusterfs-3.12.2-3.el7

OS version:

from RHEL 7.4 to RHEL 7.5

gluster peer status shows the correct state, as below:

State: Peer in Cluster (Connected)
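A minimal sketch of this verification, run from the upgraded node; the rpm query is an assumed convenience for confirming the installed build:

# Confirm the installed build, then check the peer state
rpm -q glusterfs
gluster peer status

# Expected for every peer entry:
# State: Peer in Cluster (Connected)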

Moving bug to Verified state

Comment 7 errata-xmlrpc 2018-09-04 06:42:41 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607

