Bug 1276587 - [GlusterD]: After updating one of rhgs 2.1.6 node to 3.1.2 in two node cluster, volume status is failing
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
Version: 3.1
Hardware: x86_64 Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.1.2
Assigned To: Gaurav Kumar Garg
QA Contact: Byreddy
Whiteboard: glusterd
Keywords: ZStream
Duplicates: 1277791
Depends On:
Blocks: 1260783 1267488
 
Reported: 2015-10-30 04:55 EDT by Byreddy
Modified: 2016-03-01 00:49 EST
CC List: 7 users

See Also:
Fixed In Version: glusterfs-3.7.5-6
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-03-01 00:49:15 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Byreddy 2015-10-30 04:55:33 EDT
Description of problem:
=======================
After updating one node of a two-node cluster from RHGS 2.1.6 to 3.1.2, volume status fails on both nodes (updated and not updated) with the following error message:
***
[root@ ~]# gluster volume status
Staging failed on host_name. Please check log file for details.    //host_name = other node IP
***

GlusterD Log error:
===================
[2015-10-30 08:32:13.663409] E [MSGID: 106524] [glusterd-op-sm.c:1808:glusterd_op_stage_stats_volume] 0-glusterd: Volume name get failed
[2015-10-30 08:32:13.663521] E [MSGID: 106301] [glusterd-op-sm.c:5214:glusterd_op_ac_stage_op] 0-management: Stage failed on operation 'Volume Profile', Status : -2


Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.7.5-5


How reproducible:
=================
Always


Steps to Reproduce:
===================
1. Have a two-node cluster running RHGS 2.1.6.
2. Create distributed and replicated volumes using both nodes.
3. Update one of the nodes to 3.1.2 (glusterfs-3.7.5-5).
4. Start glusterd and check the volume status.

Actual results:
===============
Volume status fails with "Staging failed on host_name. Please check log file for details."  //host_name = IP of the other node in the cluster


Expected results:
=================
Volume status should work and show the details of all volumes in the cluster.


Additional info:
Comment 2 Byreddy 2015-10-30 05:14:38 EDT
Additional info for Debug:
==========================
After updating both nodes, volume status worked fine.
Comment 3 Gaurav Kumar Garg 2015-10-30 07:26:53 EDT
Upstream patch for this bug is available: http://review.gluster.org/#/c/12473/
Comment 4 Gaurav Kumar Garg 2015-10-30 07:28:03 EDT
 RCA:
    The new tiering feature added the GD_OP_DETACH_TIER and GD_OP_TIER_MIGRATE
    enums in the middle of the glusterd_op_ enum array. In a two-node cluster
    where one node has been upgraded from a lower version to a higher one, any
    command whose glusterd operation enum value is higher than the
    GD_OP_DETACH_TIER and GD_OP_TIER_MIGRATE values is sent by the upgraded
    node with an enum value that is shifted up by those two new entries. When
    that value reaches the second node (which has not been upgraded and does
    not have the GD_OP_DETACH_TIER and GD_OP_TIER_MIGRATE enums), the second
    node picks the wrong command from its array based on the first node's
    enum value, and command execution fails.

    The fix is to place every new feature's glusterd operation enum code at
    the end of the array.
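
To make the mismatch concrete, the following is a minimal, self-contained C sketch (the enum members, their positions and their numeric values are simplified for illustration and are not taken from the glusterd sources; only the relative shift matters):

***
#include <stdio.h>

/* Op codes as the non-upgraded (RHGS 2.1.6) node knows them. */
enum old_glusterd_op {
    OLD_GD_OP_NONE,            /* 0 */
    OLD_GD_OP_PROFILE_VOLUME,  /* 1 */
    OLD_GD_OP_QUOTA,           /* 2 */
    OLD_GD_OP_STATUS_VOLUME,   /* 3 */
};

/* Op codes on the upgraded (3.1.2) node: the two tier ops were inserted in
 * the middle of the array, shifting every later entry up by two. */
enum new_glusterd_op {
    NEW_GD_OP_NONE,            /* 0 */
    NEW_GD_OP_DETACH_TIER,     /* 1  <- newly inserted */
    NEW_GD_OP_TIER_MIGRATE,    /* 2  <- newly inserted */
    NEW_GD_OP_PROFILE_VOLUME,  /* 3 */
    NEW_GD_OP_QUOTA,           /* 4 */
    NEW_GD_OP_STATUS_VOLUME,   /* 5 */
};

/* How the old node turns a received op code back into an operation. */
static const char *old_op_names[] = {
    "None", "Volume Profile", "Quota", "Volume Status",
};

int main(void)
{
    /* Only the numeric op code travels between peers during staging. */
    int on_wire = NEW_GD_OP_PROFILE_VOLUME;  /* upgraded node means "profile" */
    int n_old = (int)(sizeof(old_op_names) / sizeof(old_op_names[0]));

    if (on_wire < n_old)
        /* Prints "Volume Status": the old node stages the wrong operation. */
        printf("old node stages: %s\n", old_op_names[on_wire]);
    else
        /* Codes past the end of the old array cannot be staged at all. */
        printf("old node: unknown op code %d\n", on_wire);

    return 0;
}
***

With the fix, the new GD_OP_DETACH_TIER and GD_OP_TIER_MIGRATE entries are appended at the end of the array, so every pre-existing operation keeps the same numeric value on both the old and the new node.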
Comment 6 Gaurav Kumar Garg 2015-11-03 23:56:46 EST
*** Bug 1277791 has been marked as a duplicate of this bug. ***
Comment 7 Gaurav Kumar Garg 2015-11-03 23:58:29 EST
Downstream patch for this bug: https://code.engineering.redhat.com/gerrit/#/c/60757/
Comment 8 Byreddy 2015-11-15 23:56:44 EST
Verified this bug with the build glusterfs-3.7.5-6.

The fix is working well and the issue described above is no longer seen.

Moving the bug to the verified state.
Comment 10 errata-xmlrpc 2016-03-01 00:49:15 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0193.html
