Bug 1276587 - [GlusterD]: After updating one of rhgs 2.1.6 node to 3.1.2 in two node cluster, volume status is failing
Summary: [GlusterD]: After updating one of rhgs 2.1.6 node to 3.1.2 in two node cluste...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterd
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.1.2
Assignee: Gaurav Kumar Garg
QA Contact: Byreddy
URL:
Whiteboard: glusterd
Duplicates: 1277791
Depends On:
Blocks: 1260783 1267488
 
Reported: 2015-10-30 08:55 UTC by Byreddy
Modified: 2016-03-01 05:49 UTC
CC List: 7 users

Fixed In Version: glusterfs-3.7.5-6
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-03-01 05:49:15 UTC
Embargoed:


Attachments


Links
Red Hat Product Errata RHBA-2016:0193 (normal, SHIPPED_LIVE): Red Hat Gluster Storage 3.1 update 2 - last updated 2016-03-01 10:20:36 UTC

Description Byreddy 2015-10-30 08:55:33 UTC
Description of problem:
=======================
After updating one node of a two-node cluster from RHGS 2.1.6 to 3.1.2, volume status on both nodes (updated and not updated) fails with the error message:
***
[root@ ~]# gluster volume status
Staging failed on host_name. Please check log file for details.    //host_name = other node IP
***

GlusterD Log error:
===================
[2015-10-30 08:32:13.663409] E [MSGID: 106524] [glusterd-op-sm.c:1808:glusterd_op_stage_stats_volume] 0-glusterd: Volume name get failed
[2015-10-30 08:32:13.663521] E [MSGID: 106301] [glusterd-op-sm.c:5214:glusterd_op_ac_stage_op] 0-management: Stage failed on operation 'Volume Profile', Status : -2


Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.7.5-5


How reproducible:
=================
Always


Steps to Reproduce:
===================
1. Have a two-node cluster running RHGS 2.1.6.
2. Create distributed and replicated volumes using bricks from both nodes.
3. Update one of the nodes to 3.1.2 (glusterfs-3.7.5-5).
4. Start glusterd and check the volume status.

Actual results:
===============
Volume status fails with "Staging failed on host_name. Please check log file for details."  //host_name = IP of the other node in the cluster


Expected results:
=================
Volume status should work and show the details of all volumes in the cluster.


Additional info:

Comment 2 Byreddy 2015-10-30 09:14:38 UTC
Additional info for Debug:
==========================
After updating both nodes, volume status worked fine.

Comment 3 Gaurav Kumar Garg 2015-10-30 11:26:53 UTC
Upstream patch for this bug is available at: http://review.gluster.org/#/c/12473/

Comment 4 Gaurav Kumar Garg 2015-10-30 11:28:03 UTC
RCA:
The new tiering feature added the GD_OP_DETACH_TIER and GD_OP_TIER_MIGRATE values in the middle of the glusterd_op_ enum rather than at the end. In a two-node cluster where only one node has been upgraded to the higher version, executing any command whose glusterd operation enum value is higher than GD_OP_DETACH_TIER and GD_OP_TIER_MIGRATE makes the upgraded node send an operation code that is offset by the two newly inserted entries. When that value reaches the second, not yet upgraded node, which does not have GD_OP_DETACH_TIER and GD_OP_TIER_MIGRATE in its enum, it maps to the wrong entry in the operation array and command execution fails (see the sketch below).

The fix is to add every new glusterd operation enum code at the end of the array.
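
To make the offset concrete, here is a minimal C sketch. It is not the actual glusterd_op_ definition; the member list, ordering and values are hypothetical, with only GD_OP_DETACH_TIER and GD_OP_TIER_MIGRATE taken from the RCA above. It shows how inserting entries in the middle of the enum shifts every later operation code, so the integer an upgraded peer puts on the wire decodes to a different command on a peer built without the new entries:

/* Minimal sketch of the enum-ordering problem; NOT the real
 * glusterd_op_ definition. Only GD_OP_DETACH_TIER and GD_OP_TIER_MIGRATE
 * come from the RCA; everything else is illustrative. */
#include <stdio.h>

/* Operation codes as compiled into the old (not yet upgraded) node. */
enum old_gd_op {
    OLD_GD_OP_NONE = 0,
    OLD_GD_OP_CREATE_VOLUME,  /* 1 */
    OLD_GD_OP_STATUS_VOLUME,  /* 2 */
    OLD_GD_OP_PROFILE_VOLUME, /* 3 */
    OLD_GD_OP_QUOTA,          /* 4 */
};

/* The same codes on the upgraded node, with the two tier operations
 * inserted in the middle: every later value is shifted by 2. */
enum new_gd_op {
    NEW_GD_OP_NONE = 0,
    NEW_GD_OP_CREATE_VOLUME,  /* 1 */
    NEW_GD_OP_DETACH_TIER,    /* 2  <- inserted */
    NEW_GD_OP_TIER_MIGRATE,   /* 3  <- inserted */
    NEW_GD_OP_STATUS_VOLUME,  /* 4 */
    NEW_GD_OP_PROFILE_VOLUME, /* 5 */
    NEW_GD_OP_QUOTA,          /* 6 */
};

int main(void)
{
    /* The upgraded node stages "volume status" and sends its own
     * integer value for that operation to its peer. */
    int op_on_wire = NEW_GD_OP_STATUS_VOLUME; /* 4 */

    /* The old node decodes the same integer against its own enum and
     * therefore stages a completely different operation. */
    printf("upgraded node sent op %d (status)\n", op_on_wire);
    printf("old node decodes %d as %s\n", op_on_wire,
           op_on_wire == OLD_GD_OP_STATUS_VOLUME ? "status"
                                                 : "a different op");

    /* Appending new operations at the end of the enum instead keeps
     * every previously released value stable across versions, which
     * is what the fix does. */
    return 0;
}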

Comment 6 Gaurav Kumar Garg 2015-11-04 04:56:46 UTC
*** Bug 1277791 has been marked as a duplicate of this bug. ***

Comment 7 Gaurav Kumar Garg 2015-11-04 04:58:29 UTC
Downstream patch for this bug: https://code.engineering.redhat.com/gerrit/#/c/60757/

Comment 8 Byreddy 2015-11-16 04:56:44 UTC
Verified this bug with the build glusterfs-3.7.5-6.

The fix is working well; the issue described above is no longer seen.

Moving the bug to the Verified state.

Comment 10 errata-xmlrpc 2016-03-01 05:49:15 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0193.html

