Bug 1248895 - [upgrade] After in-service software upgrade from RHGS 2.1.6 to RHGS 3.1, probing a new RHGS 3.1 node is moving the peer to rejected state
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
3.1
x86_64 Linux
unspecified Severity urgent
: ---
: RHGS 3.1.2
Assigned To: Gaurav Kumar Garg
Byreddy
glusterd
: ZStream
Depends On: 1276541 1283035 1283178 1283187
Blocks: 1260783 1262793 1262805 1277823
Reported: 2015-07-31 02:40 EDT by SATHEESARAN
Modified: 2016-03-01 00:33 EST (History)
9 users

See Also:
Fixed In Version: glusterfs-3.7.5-5
Doc Type: Bug Fix
Doc Text:
Previously, after an upgrade, the op-version was expected to be updated through 'gluster volume set'. If the new version introduced a feature that changed the volinfo structure, the default values of the new options were not stored, which resulted in checksum mismatches between peers. With this fix, the volinfo file remains consistent after an upgrade and peer probe succeeds.
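The checksum mismatch can be illustrated with a minimal sketch (file names and contents below are hypothetical, not actual glusterd volinfo data): glusterd compares a checksum of each peer's stored volume definition, so a node that writes an extra default option line ends up with a different checksum, and the probing peer is rejected.

```shell
# Hypothetical volinfo fragments: the upgraded node's file carries a new
# default option line that the older node's file lacks.
printf 'type=2\ncount=2\n' > /tmp/volinfo.old
printf 'type=2\ncount=2\nnew-feature.option=default\n' > /tmp/volinfo.new

# Different contents -> different checksums, which glusterd treats as a
# volume-definition mismatch between peers.
cksum /tmp/volinfo.old
cksum /tmp/volinfo.new
```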
Story Points: ---
Clone Of:
: 1262793 (view as bug list)
Environment:
Last Closed: 2016-03-01 00:33:17 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
sosreport from dhcp37-51 (7.34 MB, application/x-xz)
2015-07-31 03:04 EDT, SATHEESARAN
no flags Details
sosreport from dhcp37-142 (7.42 MB, application/x-xz)
2015-07-31 03:18 EDT, SATHEESARAN
no flags Details

Description SATHEESARAN 2015-07-31 02:40:04 EDT
Description of problem:
-----------------------
The gluster cluster, also known as a 'Trusted Storage Pool', has 2 nodes running RHGS 2.1. The nodes host replica 2 volumes.

These 2 nodes were upgraded to RHGS 3.1 using the 'in-service software upgrade' procedure. The upgrade was successful. When a new RHGS 3.1 node, installed through the ISO, was probed from the already existing cluster, this new node was 'rejected'.

To summarize, new RHGS 3.1 nodes could not be added to a cluster that was upgraded from RHGS 2.1 to RHGS 3.1.

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
RHGS 2.1 Update6 ( glusterfs-3.4.0.72-1.el6rhs )
RHGS 3.1 ( glusterfs-3.7.1-11.el6rhs )

How reproducible:
-----------------
Always

Steps to Reproduce:
-------------------
1. Install 2 servers with RHGS 2.1
2. After successful installation, create a gluster cluster (Trusted Storage Pool)
3. Create a replica volume and start it
4. Perform an 'in-service software upgrade' from RHGS 2.1 to RHGS 3.1
5. After both nodes are upgraded, probe (add) a new RHGS 3.1 node (installed through ISO)

Actual results:
---------------
The newly probed node goes to the 'rejected' state
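The rejected state is visible in `gluster peer status` output. A minimal sketch of spotting it from sample output (the hostname and UUID below are illustrative placeholders, not taken from the attached sosreports):

```shell
# Sample `gluster peer status` output for the newly probed node.
status='Hostname: dhcp37-142
Uuid: 00000000-0000-0000-0000-000000000000
State: Peer Rejected (Connected)'

# Count rejected peers; a non-zero count flags the failure.
echo "$status" | grep -c 'Peer Rejected'
```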

Expected results:
-----------------
The newly probed node should be added as part of the upgraded cluster
Comment 2 SATHEESARAN 2015-07-31 03:04:39 EDT
Created attachment 1057933 [details]
sosreport from dhcp37-51
Comment 3 SATHEESARAN 2015-07-31 03:18:59 EDT
Created attachment 1057937 [details]
sosreport from dhcp37-142
Comment 4 Anand Nekkunti 2015-10-05 01:35:54 EDT
Upstream patch: http://review.gluster.org/#/c/12171/
Comment 7 Anand Nekkunti 2015-10-28 02:57:13 EDT
Downstream patch: https://code.engineering.redhat.com/gerrit/#/c/60332/
Comment 8 Byreddy 2015-10-30 07:48:05 EDT
Verified this bug with the latest version of 3.1.2 (glusterfs-3.7.5-5).

The issue remains the same.

Based on a discussion with Anand, I learned of a dependency bug which needs to be fixed before this bug can be verified - https://bugzilla.redhat.com/show_bug.cgi?id=1276541

So verification of this bug will wait until the above-mentioned bug is addressed.
Comment 9 Byreddy 2015-11-18 05:16:52 EST
I am still seeing the original issue with the latest RHGS version (glusterfs-3.7.5-6): after upgrading the 2.1.6 cluster to 3.1.2, peer status after probing a new 3.1.2 node shows "Rejected".

This bug depends on https://bugzilla.redhat.com/show_bug.cgi?id=1283035

Verification of this bug is currently blocked with the available RHGS version ***glusterfs-3.7.5-6***
Comment 10 Anand Nekkunti 2015-11-20 03:08:45 EST
Byreddy,
Bumping up the op-version is required after upgrading all the nodes.
Could you re-test with the op-version bumped?
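The op-version bump suggested above can be sketched as follows (the op-version number 30707 is only an example for a 3.7.x build, and the live `gluster` command appears as a comment since no running cluster is assumed here):

```shell
# On any one node, after ALL nodes in the pool are upgraded:
#   gluster volume set all cluster.op-version 30707   # number is an example

# The running op-version can be confirmed in glusterd.info (default path
# /var/lib/glusterd/glusterd.info); parsed here from a sample copy.
cat > /tmp/glusterd.info <<'EOF'
UUID=00000000-0000-0000-0000-000000000000
operating-version=30707
EOF
awk -F= '$1=="operating-version"{print $2}' /tmp/glusterd.info
```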
Comment 11 Byreddy 2015-11-20 04:07:01 EST
Verified based on Anand's comment: after bumping up the op-version, everything worked well with RHGS version glusterfs-3.7.5-6.

Moving to verified state.
Comment 13 errata-xmlrpc 2016-03-01 00:33:17 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0193.html
