Bug 1248895

Summary: [upgrade] After in-service software upgrade from RHGS 2.1.6 to RHGS 3.1, probing a new RHGS 3.1 node is moving the peer to rejected state
Product: [Red Hat Storage] Red Hat Gluster Storage Reporter: SATHEESARAN <sasundar>
Component: glusterd Assignee: Gaurav Kumar Garg <ggarg>
Status: CLOSED ERRATA QA Contact: Byreddy <bsrirama>
Severity: urgent Docs Contact:
Priority: unspecified    
Version: rhgs-3.1 CC: bkunal, bmohanra, bsrirama, byarlaga, nlevinki, pprakash, rcyriac, smohan, vbellur
Target Milestone: --- Keywords: ZStream
Target Release: RHGS 3.1.2   
Hardware: x86_64   
OS: Linux   
Whiteboard: glusterd
Fixed In Version: glusterfs-3.7.5-5 Doc Type: Bug Fix
Doc Text:
Previously, after an upgrade, the op-version was expected to be updated through 'gluster volume set'. If the new version introduced a feature that changed the volinfo structure without storing the default values of the new options, checksum mismatches resulted. With this fix, the volinfo file remains consistent after an upgrade and peer probe succeeds.
Story Points: ---
Clone Of:
: 1262793 (view as bug list) Environment:
Last Closed: 2016-03-01 05:33:17 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On: 1276541, 1283035, 1283178, 1283187    
Bug Blocks: 1260783, 1262793, 1262805, 1277823    
Attachments:
Description Flags
sosreport from dhcp37-51
none
sosreport from dhcp37-142 none

Description SATHEESARAN 2015-07-31 06:40:04 UTC
Description of problem:
-----------------------
The gluster cluster, also known as the 'Trusted Storage Pool', has 2 nodes running RHGS 2.1. The nodes host replica 2 volumes.

These 2 nodes were upgraded to RHGS 3.1 using the 'in-service software upgrade' procedure, and the upgrade was successful. When a new RHGS 3.1 node, installed through ISO, was probed from the existing cluster, the new node moved to the 'rejected' state.

To summarize, new RHGS 3.1 nodes could not be added to a cluster that was upgraded from RHGS 2.1 to RHGS 3.1.

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
RHGS 2.1 Update6 ( glusterfs-3.4.0.72-1.el6rhs )
RHGS 3.1 ( glusterfs-3.7.1-11.el6rhs )

How reproducible:
-----------------
Always

Steps to Reproduce:
-------------------
1. Install 2 servers with RHGS 2.1
2. After successful installation create a gluster cluster 'Trusted Storage Pool'
3. Create a replica volume and start it
4. Perform 'in-service software upgrade' from RHGS 2.1 to RHGS 3.1
5. After both nodes are upgraded, probe (add) a new RHGS 3.1 node (installed through ISO)
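The steps above can be sketched with the gluster CLI; the hostnames and brick paths below are hypothetical examples, not taken from the report:

```shell
# On node1 (RHGS 2.1): form the trusted storage pool (step 2)
gluster peer probe node2.example.com

# Create and start a replica 2 volume (step 3)
gluster volume create repvol replica 2 \
    node1.example.com:/rhgs/brick1/repvol \
    node2.example.com:/rhgs/brick1/repvol
gluster volume start repvol

# After upgrading node1 and node2 in-service to RHGS 3.1 (step 4),
# probe a freshly installed RHGS 3.1 node (step 5)
gluster peer probe node3.example.com
gluster peer status    # the new node shows "Peer Rejected" instead of "Peer in Cluster"
```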

Actual results:
---------------
Newly probed node goes to ** rejected ** state

Expected results:
-----------------
The newly probed node should be added as part of the upgraded cluster

Comment 2 SATHEESARAN 2015-07-31 07:04:39 UTC
Created attachment 1057933 [details]
sosreport from dhcp37-51

Comment 3 SATHEESARAN 2015-07-31 07:18:59 UTC
Created attachment 1057937 [details]
sosreport from dhcp37-142

Comment 4 Anand Nekkunti 2015-10-05 05:35:54 UTC
Upstream patch: http://review.gluster.org/#/c/12171/

Comment 7 Anand Nekkunti 2015-10-28 06:57:13 UTC
Downstream patch: https://code.engineering.redhat.com/gerrit/#/c/60332/

Comment 8 Byreddy 2015-10-30 11:48:05 UTC
Verified this bug with the latest version of 3.1.2 (glusterfs-3.7.5-5).

The issue remains the same.

Based on a discussion with Anand, I came to know of a dependency bug which needs to be fixed before this bug can be verified - https://bugzilla.redhat.com/show_bug.cgi?id=1276541

So verification of this bug will wait until the above-mentioned bug is addressed.

Comment 9 Byreddy 2015-11-18 10:16:52 UTC
I am still seeing the original issue with the latest RHGS version (glusterfs-3.7.5-6): after upgrading the 2.1.6 cluster to 3.1.2, the peer status after probing a new 3.1.2 node shows "Rejected".

This bug depends on https://bugzilla.redhat.com/show_bug.cgi?id=1283035

Verification of this bug is currently blocked with the available RHGS version ***glusterfs-3.7.5-6***

Comment 10 Anand Nekkunti 2015-11-20 08:08:45 UTC
Byreddy,
Bumping up the op-version is required after upgrading all nodes.
Could you re-test with this?
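The op-version bump Anand describes can be sketched as below; the node hostname is hypothetical, and 30707 is an illustrative value for a glusterfs 3.7.x-based build - check the op-version actually supported by your installed version:

```shell
# The current cluster op-version (still a 2.1.x-era value after the
# in-service upgrade) is recorded in glusterd's store
grep operating-version /var/lib/glusterd/glusterd.info

# Bump the op-version on the upgraded cluster
gluster volume set all cluster.op-version 30707

# Retry the probe of the new RHGS 3.1.2 node
gluster peer probe node3.example.com
gluster peer status
```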

Comment 11 Byreddy 2015-11-20 09:07:01 UTC
Verified based on Anand's comment: after bumping up the op-version, everything worked well with RHGS version glusterfs-3.7.5-6.

Moving to verified state.

Comment 13 errata-xmlrpc 2016-03-01 05:33:17 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0193.html